Becoming started as an exploration of impermanence — how continuous human movement can give rise to constantly changing, organic forms generated with AI.
Through multiple iterations, testing sessions, and rounds of user feedback, the project has evolved into a flexible generative system designed not only as an artwork, but as a creative tool for TouchDesigner users who want to explore real-time AI-driven visuals with intention and control.
At its core, Becoming proposes a simple idea: movement never produces the same result twice. Every gesture introduces change, and every output is temporary — always becoming, never fixed.
At the same time, the project opens up a wide range of possibilities for creators who want to integrate StreamDiffusion, real-time interaction, and AI-assisted generation into their own workflows.
The AI generation is powered by StreamDiffusion by DotSimulate.

Throughout the Daydream AI Video Program, the project went through several key iterations, each one informed by hands-on testing and feedback from different users. These iterations helped shape Becoming into a more robust, intuitive, and reusable system.
One of the most important evolutions was the introduction of a WebSocket-based control flow.
In simple terms, this means that any user can now control the generative system in real time directly from their smartphone, simply by opening a URL. No cameras, no external sensors, no cables — just motion, touch, and sound data streamed wirelessly over an internet connection into TouchDesigner.
This shift significantly improves accessibility, scalability, and portability, making the experience easier to share, test, and reuse.
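On the TouchDesigner side, the incoming messages can be handled in the callbacks of a WebSocket DAT. The sketch below is only an illustration, not the exact code shipped in the project file: it assumes the phone sends small JSON packets (a hypothetical {"type": "motion", ...} shape) and writes the parsed values into a Constant CHOP named 'sensors' so the rest of the network can read them as channels.

```python
# WebSocket DAT callback (minimal sketch; the message format and operator
# names are illustrative assumptions, not the project's actual schema).
import json

def onReceiveText(dat, rowIndex, message):
    data = json.loads(message)
    sensors = op('sensors')  # hypothetical Constant CHOP exposing sensor channels
    if data.get('type') == 'motion':
        # Write device motion into CHOP channels the rest of the network can use
        sensors.par.value0 = data.get('x', 0.0)
        sensors.par.value1 = data.get('y', 0.0)
        sensors.par.value2 = data.get('z', 0.0)
    return
```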

The WebSocket flow is paired with a clear and intuitive mobile UI, designed to give users immediate context and creative agency without overwhelming them.
Through this interface, users can:
This balance between simplicity and depth makes the system approachable for newcomers, while still offering meaningful control for more advanced users.

One of the core challenges explored in Becoming was how to give users freedom without breaking the aesthetic or conceptual integrity of a project.
To address this, the system introduces a structured prompt framework:
It addresses a question many creators run into: what if you don't want a completely different concept every time someone interacts with your project?
With this setup, creators can preserve a coherent visual and conceptual language, while still allowing users to meaningfully influence the generative process.
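As a rough illustration of the idea (the field names and word lists below are made up for this example, not the project's actual prompt schema), a structured prompt can lock the conceptual core while exposing only a few constrained slots to user input:

```python
# Minimal sketch of a structured prompt: the creator fixes the core of the
# prompt, and interaction can only vary a small, whitelisted set of slots.
BASE_PROMPT = "organic flowing forms, soft volumetric light, {material}, {mood}"

ALLOWED_MATERIALS = ["liquid metal", "ink in water", "woven fibers"]
ALLOWED_MOODS = ["calm", "turbulent", "weightless"]

def build_prompt(material_index: int, mood_index: int) -> str:
    """Clamp user choices to the allowed lists so the core prompt stays intact."""
    material = ALLOWED_MATERIALS[material_index % len(ALLOWED_MATERIALS)]
    mood = ALLOWED_MOODS[mood_index % len(ALLOWED_MOODS)]
    return BASE_PROMPT.format(material=material, mood=mood)
```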

The system currently receives multiple streams of sensor data from the smartphone, including:
At the moment, only a subset of these signals is mapped to visual parameters, but all incoming data is available inside TouchDesigner. Creators are free to use these signals to drive anything they want: geometry, textures, post-processing, AI parameters, or entirely new behaviors.
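For example, a CHOP Execute DAT attached to the incoming sensor channels can remap any of them onto parameters elsewhere in the network. The snippet below is only a sketch under assumed names (a channel called 'accel_x' driving the period of a Noise TOP called 'noise1'); the same pattern works for any signal and any parameter you want to drive.

```python
# CHOP Execute DAT sketch — channel and operator names are illustrative.
def remap(v, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap v from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = (v - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'accel_x':
        # Drive the period of a Noise TOP from the phone's accelerometer
        op('noise1').par.period = remap(val, -1.0, 1.0, 0.5, 4.0)
    return
```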

Becoming was designed to be flexible, not prescriptive.
If you don’t want to use the smartphone + WebSocket control flow, the project includes:
This makes the project adaptable to many different contexts, from installations and performances to personal experiments.

The downloadable TouchDesigner file included in this post is ready to use. You can open it, run it locally, explore the network, customize the prompts, and adapt the system to your own ideas immediately.
Below, you’ll also find a video walkthrough that explains how the system works, how the different control options are connected, and how you can start experimenting with the flow right away.
Becoming is the result of continuous iteration, testing, and refinement. It’s not a finished statement, but a system meant to be explored, modified, and extended.
I’m excited to see how you might use it, break it, or take it in unexpected directions!
If you have any questions about the project workflow, let me know in the comments. I really appreciate your feedback (or a star!).