Hometown XR (AI video program)


DEC 23

My plan for the AI video program is mainly focused on the visuals (Step 3 below, and maybe Step 2 as well) in TouchDesigner with StreamDiffusion. Hopefully, I will have a big enough space for projection mapping and hologram (and maybe VR) experiments during a residency next year. That would be a further step, and I will keep updating this plan.

For the AI video program, I would love to continue working on my ongoing project, Hometown XR. Here are the detailed steps:

1. Extract information about "hometown" from open internet datasets such as Common Crawl (long-term)  |  1%

 

2. Feed the information as prompts to StreamDiffusion to generate unstable and constantly changing "memories"  |  70%

2a. Prompt template (an AI paraphrases a paragraph describing a memory into a format suitable as a StreamDiffusion prompt; a rough sketch follows this list) + Travel Table  |  95%

2b. Voice-to-text input (AI? For a longer-term vision of real-time public/social engagement) | TBC... (maybe not this time)

 

3. Visuals  |  80%

3a. Particle visuals within StreamDiffusion (2D)  |  95%

3b. Particle visuals with other components/programming in TouchDesigner (3D; for a longer-term vision of VR development)  |  95%

3c. Face/body capture fed into StreamDiffusion (users can feel they are inside the memory generation and can interact with it) | 100%

3d. More... Feed to Unreal Engine for showcase | TBC...


4. Further

4a. Projection mapping, hologram, VR, WebXR

4b. Real-time public/social engagement
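For step 2a, here is a minimal sketch of the paraphrasing step, assuming an OpenAI-style chat endpoint does the rewriting. The model name, system instruction, and the memory_to_prompt helper are my own placeholders, not part of the project files:

```python
# Sketch: paraphrase a memory paragraph into a short, comma-separated
# StreamDiffusion-style prompt. Assumes the OpenAI Python client; the
# model name and instruction text are placeholders, not project settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "Rewrite the user's memory as a concise image prompt: "
    "comma-separated visual phrases, present tense, no full sentences, "
    "under 60 words."
)

def memory_to_prompt(memory_paragraph: str) -> str:
    """Turn a paragraph describing a memory into a StreamDiffusion prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": memory_paragraph},
        ],
        temperature=0.8,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(memory_to_prompt(
        "Summer evenings we sat on the rooftop of my grandmother's house, "
        "watching trains pass behind the poplar trees."
    ))
```

The output would then go through the Travel Table before reaching StreamDiffusion.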

-------------------------------------------------------------------------------------------------------------------------------------------

DEC 27

Reference:

Blooming Watercolor Flowers: https://app.daydream.live/creators/Daydream/blooming-watercolor-flowers

Controllable Clouds: https://app.daydream.live/creators/Daydream/controllable-clouds

Interactive Particle Spray: https://app.daydream.live/creators/Daydream/interactive-particle-spray

Oil painting animation: https://app.daydream.live/creators/Daydream/oil-painting-animation

Body Tracking Flower Guy: https://app.daydream.live/creators/as.ws__/body-tracking-flower-guy

Smoke Simulation: https://app.daydream.live/creators/as.ws__/smoke-simulation

Fiery Dancer Project: https://app.daydream.live/creators/as.ws__/firey-dancer-project

Character Window Project File: https://app.daydream.live/creators/as.ws__/character-window-project-file

Character After Shadow Project: https://app.daydream.live/creators/as.ws__/character-after-shadow-project

Mediapipe: https://github.com/torinmb/mediapipe-touchdesigner?tab=readme-ov-file

Some tutorials about particles on The Interactive & Immersive HQ

Andrew's meet 2 demo

Travel Table from Dotsimulate

-------------------------------------------------------------------------------------------------------------------------------------------

JAN 1

Tried two approaches for particles:

 

TOPs + Instancing

Questions:

Need to learn and experiment with more manipulation methods to optimize the visual effect (a small pixel-to-instance sketch follows below).

Tried to add a replace step in the feedback loop to create a "smoky" effect, but it broke the workflow.
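For the TOPs + instancing route, the core idea is that every pixel of a TOP becomes one instance: the RGB of a position texture drives XYZ, and a second texture drives per-instance color. A minimal sketch of that pixel-to-instance mapping, in plain NumPy outside TouchDesigner (array names and the scale factor are my own), not the actual network:

```python
# Sketch: how a WxH "position" TOP and a "color" TOP turn into W*H instances.
# Plain NumPy stand-in for what the Geometry COMP's instancing does when it
# reads translate XYZ and color RGB from TOP channels. Names and scales are
# illustrative only.
import numpy as np

H, W = 64, 64                      # TOP resolution -> 4096 instances
pos_top = np.random.rand(H, W, 3)  # stand-in for a noise/feedback position TOP
col_top = np.random.rand(H, W, 3)  # stand-in for the StreamDiffusion output TOP

# Flatten: one row per instance.
positions = pos_top.reshape(-1, 3)          # (N, 3), values in 0..1
colors = col_top.reshape(-1, 3)             # (N, 3) RGB

# Remap 0..1 texture values to world-space coordinates (scale is arbitrary).
world = (positions - 0.5) * 10.0            # centered, roughly +/-5 units

print(world.shape, colors.shape)            # (4096, 3) (4096, 3)
# In TD, the same data would feed the Geometry COMP's instance translate
# and color inputs instead of being printed.
```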

 

POPs + Particles

Questions:

Trying to use POPs to map the generated animation to particles.

Not sure how to use feedback in the workflow.

The generated animation looks jumpy from frame to frame. Can I add feedback, displace, and noise to make it look "smooth", like the effect in the meet 2 demo, Interactive Particle Spray, and Blooming Watercolor Flowers? I just don't know how to do that in POPs... (a rough sketch of the underlying math follows these questions)
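The smoothing question above is essentially temporal feedback plus a noise-driven displace. A rough sketch of that math in NumPy (decay and displacement amounts are made up); in TOPs this would be a feedback/level/displace-style chain, and how to express the same thing in POPs is exactly the open question:

```python
# Sketch: frame-to-frame smoothing via feedback blending plus a noise displace.
# This is only the arithmetic behind a feedback + level + displace style chain;
# the decay and strength values are arbitrary.
import numpy as np

H, W = 256, 256
prev = np.zeros((H, W, 3))         # accumulated "feedback" buffer
decay = 0.85                       # how much of the previous frame survives

def noise_offsets(h, w, strength=2.0):
    """Per-pixel displacement offsets (stand-in for a noise texture)."""
    return (np.random.rand(h, w, 2) - 0.5) * strength

def smooth(new_frame, prev, decay):
    """Blend the new frame over a decayed, noise-displaced previous frame."""
    dy, dx = np.indices((H, W)).astype(float)
    off = noise_offsets(H, W)
    ys = np.clip((dy + off[..., 0]).astype(int), 0, H - 1)
    xs = np.clip((dx + off[..., 1]).astype(int), 0, W - 1)
    displaced = prev[ys, xs]                     # displace the feedback buffer
    return decay * displaced + (1.0 - decay) * new_frame

for _ in range(10):                              # stand-in for 10 incoming frames
    new_frame = np.random.rand(H, W, 3)          # generated animation frame
    prev = smooth(new_frame, prev, decay)
```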

Still learning and experimenting with manipulation in POPs. It seems easier to manipulate with instancing and TOPs? My next step is to add facial and body mocap to the workflow so participants feel they can control the generated animation within the particle environment.

-------------------------------------------------------------------------------------------------------------------------------------------

JAN 5

I added a live camera feed with a noise backdrop to StreamDiffusionTD. Now participants can have a more interactive experience with the memory generation.
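A minimal sketch of that input composite, done in OpenCV rather than TD (the blend weights and the noise resolution are arbitrary), just to illustrate what gets handed to StreamDiffusionTD:

```python
# Sketch: one webcam frame composited over a noise backdrop, roughly the kind
# of camera + noise input fed to StreamDiffusionTD. Weights are arbitrary.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    h, w = frame.shape[:2]
    noise = (np.random.rand(h, w, 3) * 255).astype(np.uint8)
    composite = cv2.addWeighted(frame, 0.7, noise, 0.3, 0.0)
    cv2.imwrite("composite_input.png", composite)
```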

With Andrew's help, I fixed the issue of the Travel Table dropping frames in StreamDiffusionTD, and I also learned more ways to manipulate data for visual effects.

I think I have almost finished the TouchDesigner part. The next and final step is to feed everything into Unreal Engine for the showcase. Wish me luck; I think I will meet Andrew again with questions on Thursday.

-------------------------------------------------------------------------------------------------------------------------------------------

JAN 8 Final Report

Hometown XR is an ongoing project. Through the AI video program, the project evolved from a 2D particle format toward a 3D particle system, and it is being integrated with Unreal Engine.

The final project demo:

It is a real-time interactive storytelling work built in TouchDesigner with StreamDiffusionTD and the Daydream API.

1. Structurally unstable and constantly changing 3D memory building

It starts by feeding texts about memories, both extracted from Common Crawl and entered by the participant, into StreamDiffusion, which generates endless images as an unstable and constantly changing memory stream. The stream is then mapped onto a particle system to build the memory as a 3D structure.
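A rough sketch of the Common Crawl side, using warcio over one downloaded WET (extracted-text) file; the file path and the plain keyword filter are placeholders, not the project's actual pipeline:

```python
# Sketch: pull paragraphs mentioning "hometown" out of one Common Crawl WET
# file. The path is a placeholder; crawls are listed at
# https://data.commoncrawl.org/.
from warcio.archiveiterator import ArchiveIterator

def hometown_passages(wet_path, keyword="hometown", max_hits=100):
    """Collect up to max_hits paragraphs containing the keyword."""
    hits = []
    with open(wet_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "conversion":   # WET text records
                continue
            text = record.content_stream().read().decode("utf-8", "ignore")
            for para in text.split("\n"):
                if keyword in para.lower():
                    hits.append(para.strip())
                    if len(hits) >= max_hits:
                        return hits
    return hits

# passages = hometown_passages("CC-MAIN-example.warc.wet.gz")
```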

2. Participant interaction building

After adding a live camera feed to StreamDiffusion, the participant's face and body also become components of the memory generation. I intend for participants to feel they are inside the generation (the memory) and can interact with it through their movement.
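In the project, the tracking comes from the MediaPipe TouchDesigner plugin listed in the references. Outside TD, a minimal Python sketch of the kind of landmark data it exposes (single image, pose solution from the mediapipe package; the image path is a placeholder):

```python
# Sketch: read one image and print normalized pose landmarks with MediaPipe.
# The actual workflow uses the mediapipe-touchdesigner plugin; this only
# illustrates the landmark data (x, y, z in normalized image space) being fed on.
import cv2
import mediapipe as mp

image = cv2.imread("participant.jpg")            # placeholder input frame
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(rgb)

if results.pose_landmarks:
    for i, lm in enumerate(results.pose_landmarks.landmark):
        print(i, round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))
```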

3. Unreal Engine integration

The last step of this demo is an attempt to feed the particle system, and the moving images mapped onto the particles, from TouchDesigner into Unreal Engine. I intend to use Unreal Engine as a more efficient, higher-quality 3D building environment for the continuation of the project.

The position and color data of the particle system are streaming successfully via Spout.

On the Unreal Engine end, I got the shape of the particle system from TouchDesigner (position data), but I am still figuring out how to map the moving images (color data) onto the particle system in Unreal Engine.
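The transfer relies on baking particle attributes into textures: one float texture holds positions (RGB = XYZ), another holds colors, and both travel over Spout as images for the Unreal side to sample per particle. A NumPy sketch of that packing (the resolution, value ranges, and helper names are my own choices; the actual TD/UE hookup is separate):

```python
# Sketch: pack N particle positions and colors into two square float textures
# so they can be streamed over Spout as images and looked up per particle
# (particle index -> pixel coordinate) on the Unreal/Niagara side.
import numpy as np

def pack_to_texture(attrs, size=64):
    """attrs: (N, 3) array; returns a (size, size, 3) float32 texture."""
    tex = np.zeros((size * size, 3), dtype=np.float32)
    n = min(len(attrs), size * size)
    tex[:n] = attrs[:n]
    return tex.reshape(size, size, 3)

def index_to_uv(i, size=64):
    """Pixel coordinate a particle index maps to, for the lookup on the UE side."""
    return (i % size, i // size)

n_particles = 4000
positions = np.random.uniform(-5.0, 5.0, (n_particles, 3)).astype(np.float32)
colors = np.random.rand(n_particles, 3).astype(np.float32)

pos_tex = pack_to_texture(positions)   # would go out as a 32-bit float texture
col_tex = pack_to_texture(colors)      # second stream carrying color
print(pos_tex.shape, col_tex.shape, index_to_uv(130))
```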

One more issue is balancing the capacity of my GPU, since two particle systems are running simultaneously in TouchDesigner and Unreal Engine.

4. Next step

TouchDesigner and Unreal Engine integration

Voice-to-text input (AI; see the sketch after this list)

Output to a monitor, projection mapping, VR, etc.
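For the voice-to-text step, one possible route (my assumption, not a decided tool) is the open-source Whisper model, transcribing short recorded clips before they go through the same prompt-paraphrasing step as typed input:

```python
# Sketch: transcribe a short spoken memory to text with openai-whisper.
# The model size and the audio file name are placeholders.
import whisper

model = whisper.load_model("base")
result = model.transcribe("spoken_memory.wav")
print(result["text"])
```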

Attachments
v8