Becoming


Becoming From Concept to a Flexible Generative System

Becoming started as an exploration of impermanence: how continuous human movement can give rise to constantly changing, organic forms generated with AI. Through multiple iterations, testing sessions, and user validations, the project has evolved into a flexible generative system designed not only as an artwork, but as a creative tool for TouchDesigner users who want to explore real-time AI-driven visuals with intention and control.

At its core, Becoming proposes a simple idea: movement never produces the same result twice. Every gesture introduces change, and every output is temporary, always becoming, never fixed.

At the same time, the project opens up a wide range of possibilities for creators who want to integrate StreamDiffusion, real-time interaction, and AI-assisted generation into their own workflows. 

The AI generation is powered by StreamDiffusion by DotSimulate.

From experimentation to iteration

Throughout the Daydream AI Video Program, the project went through several key iterations, each one informed by hands-on testing and feedback from different users. These iterations helped shape Becoming into a more robust, intuitive, and reusable system.

One of the most important evolutions was the introduction of a WebSocket-based control flow.

In simple terms, this means that any user can now control the generative system in real time directly from their smartphone, simply by opening a URL. No cameras, no external sensors, no cables: just motion, touch, and sound data streamed wirelessly over an internet connection into TouchDesigner.

This shift significantly improves accessibility, scalability, and portability, making the experience easier to share, test, and reuse.
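As a sketch, the control flow above could look like the following: the phone page sends small JSON frames over the WebSocket connection, and TouchDesigner decodes them into named channels. All field and function names here are illustrative assumptions, not the project's actual schema.

```python
import json

# Hypothetical example of one sensor payload the phone page might send
# over the WebSocket connection (field names are illustrative).
raw_message = json.dumps({
    "accel": {"x": 0.12, "y": -0.53, "z": 9.81},
    "gravity": {"x": 0.0, "y": 0.0, "z": 9.81},
    "mic_level": 0.42,
    "touch": {"x": 0.5, "y": 0.5},
})

def parse_sensor_message(message: str) -> dict:
    """Decode one JSON text frame into a flat dict of channel values."""
    data = json.loads(message)
    channels = {}
    for group in ("accel", "gravity", "touch"):
        for axis, value in data.get(group, {}).items():
            channels[f"{group}_{axis}"] = float(value)
    if "mic_level" in data:
        channels["mic_level"] = float(data["mic_level"])
    return channels

channels = parse_sensor_message(raw_message)
print(channels["accel_z"])  # 9.81
```

Inside TouchDesigner, a function like this would typically live in a WebSocket DAT's receive callback, feeding the resulting values into CHOP channels that drive the network.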

An intuitive mobile interface for real-time control

The WebSocket flow is paired with a clear and intuitive mobile UI, designed to give users immediate context and creative agency without overwhelming them.

Through this interface, users can:

  • Control real-time inputs that drive StreamDiffusion.
  • Switch between different visual input modes.
  • Choose from predefined prompt styles, or enter their own style input.
  • Adjust the number of diffusion steps, controlling how closely the AI output stays tied to the input image.

This balance between simplicity and depth makes the system approachable for newcomers, while still offering meaningful control for more advanced users.
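To make the steps control concrete, here is a minimal sketch of how a normalized slider value from the mobile UI might be quantized into a diffusion step count. The function name and ranges are assumptions; StreamDiffusion is typically run with very few steps to stay real-time.

```python
def slider_to_steps(value: float, min_steps: int = 1, max_steps: int = 4) -> int:
    """Quantize a normalized slider (0..1) into a diffusion step count.

    Clamps out-of-range input so a noisy UI value can never push the
    pipeline outside the real-time-friendly step range.
    """
    value = max(0.0, min(1.0, value))
    return min_steps + round(value * (max_steps - min_steps))

# Slider fully down -> fewest steps; fully up -> most steps.
low = slider_to_steps(0.0)   # 1
high = slider_to_steps(1.0)  # 4
```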

Freedom for users, intention for creators

One of the core challenges explored in Becoming was how to give users freedom without breaking the aesthetic or conceptual integrity of a project.

To address this, the system introduces a structured prompt framework:

  • Three fully customizable base prompts, editable directly in TouchDesigner.
  • A prefix and suffix system, also customizable by the creator.
  • The user’s input (from the smartphone) is contextualized within this framework, rather than replacing it.

This framework answers a common question: what if you don't want a completely different concept every time someone interacts with your project?

With this setup, creators can preserve a coherent visual and conceptual language, while still allowing users to meaningfully influence the generative process.
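The prompt framework described above can be sketched as a simple composition function. The function name and example strings are illustrative, not the project's actual prompts; the key idea is that the user's input is slotted between creator-defined parts rather than replacing them.

```python
def build_prompt(prefix: str, base: str, user_style: str, suffix: str) -> str:
    """Compose a StreamDiffusion prompt from the creator-defined framework
    plus the user's style input (names are illustrative)."""
    parts = [prefix, base, user_style, suffix]
    # Drop empty parts so a missing user input leaves the base prompt intact.
    return ", ".join(p.strip() for p in parts if p.strip())

# The creator keeps conceptual control via prefix/base/suffix; the user's
# phone input only colors the result.
prompt = build_prompt(
    prefix="organic abstract forms",
    base="flowing ink in water, soft volumetric light",
    user_style="iridescent coral tones",
    suffix="cinematic, highly detailed",
)
```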

Rich sensor data, open-ended possibilities

The system currently receives multiple streams of sensor data from the smartphone, including:

  • Accelerometer data
  • Gravity values on X, Y, and Z axes
  • Microphone input level

At the moment, only a subset of these signals is mapped to visual parameters, but all incoming data is available inside TouchDesigner. Creators are free to use these signals to drive anything they want: geometry, textures, post-processing, AI parameters, or entirely new behaviors.
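As an example of mapping raw sensor data to a visual parameter, the sketch below computes overall motion energy from the accelerometer and remaps it to a normalized range, similar to what a Math CHOP would do inside TouchDesigner. The input/output ranges and names are assumptions.

```python
import math

def accel_magnitude(x: float, y: float, z: float) -> float:
    """Overall motion energy from the three accelerometer axes."""
    return math.sqrt(x * x + y * y + z * z)

def map_range(value: float, in_min: float, in_max: float,
              out_min: float, out_max: float) -> float:
    """Linearly remap and clamp a sensor value into a parameter range."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

# e.g. drive a displacement amount from motion energy
# (the 0..2 input range is an assumption about typical gesture intensity).
motion = accel_magnitude(0.3, -0.2, 0.1)
displacement = map_range(motion, 0.0, 2.0, 0.0, 1.0)
```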

Modular by design

Becoming was designed to be flexible, not prescriptive.

If you don’t want to use the smartphone + WebSocket control flow, the project includes:

  • A master switch to control everything directly from the TouchDesigner UI.
  • An option to plug in your own input source instead: OSC, Kinect, MIDI, joysticks, custom controllers, or anything else TouchDesigner supports.

This makes the project adaptable to many different contexts, from installations and performances to personal experiments.

Ready to explore

The downloadable TouchDesigner file included in this post is ready to use. You can open it, run it locally, explore the network, customize the prompts, and adapt the system to your own ideas immediately.

Below, you’ll also find a video walkthrough that explains how the system works, how the different control options are connected, and how you can start experimenting with the flow right away.

Becoming is the result of continuous iteration, testing, and refinement. It’s not a finished statement, but a system meant to be explored, modified, and extended.

I’m excited to see how you might use it, break it, or take it in unexpected directions! 

If you have any questions about the project workflow, let me know in the comments. I really appreciate your feedback (or a star!).

Attachments
v15
Becoming.zip (ZIP file)