The previous steps of the project focused primarily on integrating StreamDiffusion and TouchDesigner for a particle-based presentation of memories. I also attempted to integrate the particle system with Unreal Engine for further development, although that was not very successful. Details are here: https://app.daydream.live/creators/wenjun/streamdiffusion-particle-ai-video-program
For this AI video program with Scope, I plan to continue working on the memory project, exploring Scope with LoRAs, VACE, and plugins to control and manipulate the video generation. The generated video will finally be fed to TouchDesigner for particleization and interaction.
-------------------------------------------------------------------------------
2.19
After days of messing around with Scope (local/remote/RunPod, the UI, LoRAs, plugins, etc.), I think I have finally figured out what I can do with Scope at the moment.
The main concern is that I don't have a very good GPU, so I can only run Scope with remote inference or on RunPod. That means if there is a network issue, the whole thing won't work at all. Being able to run things locally as a backup would be great.
Since I can't use LoRAs and plugins with remote inference at the moment, I run Scope on RunPod. That way, I need to figure out the communication between Scope on RunPod and TouchDesigner on my local PC.
Sending a video stream from local TD to Scope on RunPod via an NDI virtual webcam is not a problem, but I got stuck on how to send the generated video from Scope on RunPod back to local TD... and I ultimately have to finish my work in local TD...
Finally, after an entire day of vibe coding, I barely solved it. Though I am still not sure what exactly is happening, it works OK: I can now get the generated video from Scope on RunPod into local TD. The keywords in the workflow include: HD webcam, virtual webcam, OBS, NDI, network bridge, tunnel, localhost, the webBrowser component, the Web Render TOP, hardware lock, camera injection... I have attached the files here as a reference. I can see it is not a very good method, but at least I can finally continue the TD part of my work tomorrow! Do let me know if you guys have a better method later!
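For what it's worth, one small, self-contained piece of this kind of bridge is easy to reason about in plain Python: when frames are relayed between processes, a bounded buffer that drops the oldest frame keeps latency from growing forever. This is a generic sketch, not the actual bridge code from the attached files:

```python
from collections import deque

class FrameRelay:
    """Bounded frame buffer for a producer/consumer relay: the producer
    pushes frames, the consumer pulls them; when the consumer falls behind,
    the oldest frames are silently dropped so latency stays bounded."""

    def __init__(self, capacity=3):
        # deque with maxlen automatically discards the oldest item on append
        self.buffer = deque(maxlen=capacity)

    def push(self, frame):
        """Add a new frame, dropping the oldest one if the buffer is full."""
        self.buffer.append(frame)

    def pull(self):
        """Return the oldest buffered frame, or None if the buffer is empty."""
        return self.buffer.popleft() if self.buffer else None
```

With a capacity of 3, pushing five frames and then pulling yields only the three newest, which is usually the right trade-off for a live video link.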
By the way, while messing around, I found some things I may continue working on with Scope.
1. There don't seem to be many (good) LoRAs out there, so I may need to train my own. Training 1.3B LoRAs doesn't seem to be very heavy work...
2. Plugins. I made a simple story sequencer tool with vibe coding, though I haven't managed to integrate it into Scope yet. Could the RunPod-Scope-to-local-TD workflow also become a plugin? Since I don't have a strong coding background, making plugins doesn't seem very efficient for me, but I will have to try whenever I need a tool, like that workflow...
3. Prompts and the timeline. I still had the mindset that the story (video) sequence would look like scene 1 fading (or cutting) into scene 2. But the way Scope (real-time video generation) actually works is more like a continuous, uncut shot, like the movie Birdman. So I should make the LLM aware of this and have it generate prompts that work for a continuous, uncut movie.
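Item 3 could be sketched as a small prompt timeline: instead of hard scene cuts, blended prompts are inserted between scenes so the stream drifts from one scene into the next. This is a hypothetical sketch, not code from the project; the scene prompts, timings, and blend wording are all made up:

```python
def build_timeline(scenes, transition_steps=3, transition_s=2.0):
    """scenes: list of (hold_seconds, prompt).
    Returns (start_seconds, prompt) cues. Between consecutive scenes,
    insert transition_steps blended prompts, each held for transition_s
    seconds, so the generation drifts instead of cutting."""
    cues, t = [], 0.0
    for i, (hold, prompt) in enumerate(scenes):
        cues.append((t, prompt))
        t += hold
        if i + 1 < len(scenes):
            nxt = scenes[i + 1][1]
            for s in range(1, transition_steps + 1):
                w = s / (transition_steps + 1)  # blend weight toward next scene
                cues.append((t, f"{prompt}, gradually becoming {nxt}, transition {w:.0%}"))
                t += transition_s
    return cues

def prompt_at(cues, elapsed):
    """Return the active prompt for a given elapsed time in seconds."""
    current = cues[0][1]
    for start, prompt in cues:
        if elapsed >= start:
            current = prompt
    return current
```

In use, a loop (or an LLM writing the prompts) would call `prompt_at` every frame or every few seconds and feed the result to the generation, so there is never a hard cut in the prompt stream.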
To be continued...
I am happy I can continue working on the visual things in TD tomorrow!
---------------------------------------------------------------------------------------
2.20
It is time to wrap things up. These two weeks have been an exploration period with Scope for my ongoing project, Hometown XR.
Now the project is developing in three modes.
First, plain storytelling mode. (I might not have had a good network this morning.)
Stories are extracted from open internet datasets via Common Crawl. I am still working on a story-extraction script; with that, hopefully, I won't need to extract stories manually.
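As a sketch of what the extraction script's filtering step might look like: fetching and parsing actual Common Crawl WARC records (e.g. with the warcio library) is a separate step, so this hypothetical first pass only scores already-extracted paragraphs. The keyword list and thresholds are made up for illustration:

```python
# Phrases that hint a paragraph is a personal memory story (illustrative only).
MEMORY_KEYWORDS = {"remember", "childhood", "hometown", "grandmother",
                   "used to", "back then", "growing up"}

def looks_like_memory_story(text, min_words=50, min_hits=2):
    """Keep a paragraph only if it is long enough and mentions enough
    memory-related phrases to be worth reviewing manually."""
    lowered = text.lower()
    hits = sum(1 for kw in MEMORY_KEYWORDS if kw in lowered)
    return len(text.split()) >= min_words and hits >= min_hits

def extract_candidates(paragraphs, **kwargs):
    """Filter a batch of plain-text paragraphs down to story candidates."""
    return [p for p in paragraphs if looks_like_memory_story(p, **kwargs)]
```

A crude keyword pass like this would still over- and under-select, so the surviving candidates would likely need an LLM or manual pass afterwards; it just shrinks the pile.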
These videos are generated with the Memflow model and the Daydream Scope LoRAs (scale 1): https://civitai.com/models/2383884/daydream-scope-loras
I think I may try training 1.3B LoRAs to explore style control.
Second, interactive storytelling mode.
I added a webcam with a noise background as the video input, using the Memflow model, the gray preprocessor, and the Daydream Scope LoRAs (scale 1), with VACE off for higher FPS. I intend to let participants build and interact with memories in real time.
I still need to build the voice-to-text prompt input script, and experiment with the preprocessor and VACE to gain more control over the video generation.
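The voice-to-text script isn't built yet, but the cleanup step after transcription can be sketched independently. This assumes some speech-to-text engine has already produced a raw transcript; the filler-word list is made up for illustration, and in practice a word like "like" might need smarter handling:

```python
import re

# Filler words to strip from a spoken transcript (illustrative list only).
FILLERS = {"um", "uh", "er", "like", "you know"}

def transcript_to_prompt(transcript, max_words=40):
    """Lowercase the transcript, strip filler words and punctuation-only
    tokens, and truncate so a rambling sentence doesn't flood the prompt."""
    text = transcript.lower()
    # Remove longer fillers first so "you know" is matched as a phrase.
    for filler in sorted(FILLERS, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(filler)}\b", " ", text)
    # Drop tokens left over as bare punctuation after filler removal.
    words = [w for w in text.split() if any(c.isalnum() for c in w)]
    return " ".join(words[:max_words])
```

The resulting string would then be pushed into Scope's prompt field each time a new utterance finishes, so participants can steer the generation by speaking.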
Third, particle interactive mode.

Building on the previous version, I added joystick control in TouchDesigner, so now I can do pretty smooth 3D navigation in TD and no longer need to feed particles to Unreal Engine. I intend to develop this into VR, installation, and projection mapping with viewer interaction.
I still need to refine the joystick control and try building the VR version. I will try to present this in a physical space as an installation and projection mapping at the coming residency. I also plan to try more sensors, like the Kinect, for distance- and movement-based interaction.
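As a rough sketch of the joystick conditioning involved: a deadzone keeps the camera from drifting when the stick is at rest, and a one-pole low-pass filter smooths the motion. In TouchDesigner this logic would live in a Script CHOP or similar; the plain-Python version below uses made-up parameter values:

```python
def apply_deadzone(value, deadzone=0.1):
    """Map raw stick input in [-1, 1] to 0 inside the deadzone, and rescale
    the remaining range so the output still spans the full [-1, 1]."""
    if abs(value) < deadzone:
        return 0.0
    sign = 1.0 if value > 0 else -1.0
    return sign * (abs(value) - deadzone) / (1.0 - deadzone)

class SmoothedAxis:
    """One-pole low-pass filter: each frame moves a fraction of the way
    toward the target, trading a little latency for smoother navigation."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing
        self.value = 0.0

    def update(self, raw):
        """Feed one raw joystick sample; return the smoothed axis value."""
        target = apply_deadzone(raw)
        self.value += self.smoothing * (target - self.value)
        return self.value
```

Raising `smoothing` makes the camera respond faster but more jittery; lowering it feels floatier. Per-axis instances of `SmoothedAxis` would drive the camera's translation and rotation.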