MemoryScapes — interactive audiovisual system for free improvisation with AI

MemoryScapes is a real-time audiovisual performance system where live acoustic instruments, electronics, and performer movement drive generative visuals through an AI pipeline.

The system combines Max/MSP, TouchDesigner, Kinect v2, and Scope into a unified performance environment. Audio from live instruments is analyzed and processed in Max/MSP, generating both electronic sound and control data. Performer movement is captured via Kinect and influences both sound and visual behavior.
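
In the actual system this analysis and control output lives inside Max/MSP (for example, sent over UDP with [udpsend]); the Python sketch below is only an illustrative stand-in showing the shape of such a control stream. The OSC addresses, port number, and feature names are hypothetical.

```python
# Illustrative stand-in for the Max/MSP control stream (the real system
# sends this from Max/MSP, e.g. via [udpsend]). Addresses, port, and
# feature names are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # TouchDesigner's assumed OSC In port

def send_frame(rms, centroid_hz, hand_l_xyz, hand_r_xyz):
    """Send one frame of audio features and Kinect joint positions."""
    client.send_message("/audio/rms", rms)                # loudness envelope
    client.send_message("/audio/centroid", centroid_hz)   # spectral brightness
    client.send_message("/kinect/hand_l", hand_l_xyz)     # [x, y, z] in meters
    client.send_message("/kinect/hand_r", hand_r_xyz)

send_frame(0.42, 1850.0, [-0.3, 1.1, 2.0], [0.4, 1.2, 1.9])
```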

This data is sent to TouchDesigner, where an audio-reactive visual structure is generated in real time. The resulting video stream is then passed to Scope, where it is transformed using prompt-based AI video diffusion. The processed output returns to TouchDesigner, where it is combined with the original generative visuals and the performer’s silhouette to create the final image.
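
On the TouchDesigner side, incoming OSC like this is typically handled by an OSC In DAT or CHOP. Below is a minimal sketch of such a handler, assuming the hypothetical addresses above and hypothetical operator names; only the onReceiveOSC callback signature is TouchDesigner's own.

```python
# Callbacks for an OSC In DAT in TouchDesigner. Operator names and the
# parameter mapping are hypothetical; the signature is the DAT's default.
def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    if address == '/audio/rms':
        # Louder playing -> stronger displacement in the generative geometry
        op('noise1').par.amp = args[0] * 2.0
    elif address == '/kinect/hand_l':
        # Left-hand height (y) fades a visual layer in and out
        x, y, z = args
        op('level1').par.opacity = max(0.0, min(1.0, y))
    return
```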

Pipeline:

Audio + Kinect → Max/MSP (analysis, synthesis, OSC control) → TouchDesigner (audio-reactive visuals) → Scope (AI video transformation via prompts) → TouchDesigner (final compositing and projection)
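
How the video stream actually travels between TouchDesigner and Scope is not specified above; inter-application video routing of this kind is commonly done with NDI or Spout/Syphon. Under that assumption, the final compositing stage might look like the following TouchDesigner Python sketch, with hypothetical operator names (in practice this would usually be wired directly in the TOP network):

```python
# Final compositing stage, sketched as TouchDesigner Python with
# hypothetical operator names. Video routing to/from Scope is assumed
# to happen over NDI (or Spout/Syphon).
visuals  = op('generative_visuals')  # original audio-reactive TOP chain
ai_layer = op('scope_return')        # Scope's processed stream, e.g. an NDI In TOP
mask     = op('kinect_silhouette')   # player-index output of the Kinect TOP

mix = parent().create(compositeTOP, 'final_mix')
mix.par.operand = 'over'             # layer the AI-transformed stream over the base visuals
mix.inputConnectors[0].connect(ai_layer)
mix.inputConnectors[1].connect(visuals)

# Re-insert the performer's silhouette on top via a matte step
matte = parent().create(multiplyTOP, 'silhouette_matte')
matte.inputConnectors[0].connect(mix)
matte.inputConnectors[1].connect(mask)
```

Keeping the base visuals, the Scope return, and the silhouette as separate layers would let each be faded or remapped independently during performance.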

The system is fully real-time and supports expressive control through sound and gesture, allowing flexible interaction between performer, system, and visual output.

Conceptually, MemoryScapes explores music as a space of memory — how performance can activate layers of personal and collective memory that unfold visually around the performer. The project is connected to my ongoing research into memory, perception, and the transmission of information, including the emergence of perceptual illusions in audiovisual systems.

Current status:

A working real-time prototype with stable audio-visual interaction.

Future development:

The system will be further developed into a full-scale audiovisual performance (approx. 30 minutes) involving multiple acoustic instruments (guitar, violin, flute, percussion), extended electronic processing, and expanded interaction through performer movement. Future iterations may also include additional performers (e.g., a dancer) interacting with the system via Kinect, as well as refinement of the visual language and compositional structure.

About the author  

My name is Leonid Zvolinsky. I am a composer and media artist based in Japan, working with sound, live electronics, and interactive audiovisual systems. My work focuses on real-time performance environments combining acoustic instruments, electronics, and generative visuals.