Daydream

World Models & Interactive Video

Rewinding the Realtime Video AI Summit

On October 20, 2025, Daydream brought together the leading voices in real-time AI and generative video by hosting the first-of-its-kind Realtime Video AI Summit.

Held at Gray Area in San Francisco during Open Source AI Week, the summit united over 100 researchers, developers, and creative technologists from around the world to exchange ideas and explore the future of open, real-time video AI.

Across a full day of talks, installations, and workshops, the summit spanned topics from new diffusion pipelines and world-model research to creative workflows and live performance systems, reflecting both how quickly the video field is moving and the need for a space for open collaboration.


Featured Talks

The program featured leading researchers in video AI:

  • “StreamDiffusion V2” by Chenfeng Xu (UT Austin)
  • “Towards Video World Models” by Xun Huang (CMU)
  • “StreamV2V and Recent Advancements” by Jeff Liang (Meta)
  • "StreamDiffusionTD" by DotSimulate

🎥 Missed the event IRL? Watch all the recordings here!


Reflections

“WOW what a phenomenal event you hosted today. I’ve attended a lot of events lately, but the speakers you curated helped shed my jadedness and genuinely made me excited about how AI can make us more human.”
Stacie C.

What began as an open experiment has developed into a growing ecosystem where researchers, artists, and builders work side by side on the future of live AI media.


Looking Ahead

And this is only the beginning. Planning for the Realtime Video AI Summit 2026 is already underway, with expanded research tracks and new opportunities for collaboration.

If you’re developing new world models, building open video tools, or exploring creative applications of real-time video AI, this is your chance to showcase your work.

📩 Interested in sponsoring or speaking at our next Realtime Video AI Summit? Sign up here.


Stay Connected