Realtime Video AI in your macOS creative tools

Syphon

When you're building a live visual rig or chaining together multiple creative apps on a Mac, getting a high-resolution video feed out of one piece of software and into another usually means dealing with lag, dropped frames, or clunky screen-capture workarounds. Syphon was built to completely bypass that bottleneck.

Syphon is an open-source macOS framework that allows different applications to share live video textures with effectively zero latency. It pulls this off by keeping the entire pipeline strictly on the graphics card. Instead of rendering a video frame, copying it into CPU memory, and handing it off, the sending app (the Syphon Server) simply gives the receiving app (the Syphon Client) a shared reference to the texture where that frame already sits in the GPU's memory.

This zero-copy approach means you can pipe heavy, high-resolution generative visuals from a coding environment directly into a VJ app or broadcasting tool without burning through your system's overhead.
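For the curious, the sending side boils down to just a few lines. Here's a rough sketch of a Syphon server in Swift using the framework's Metal API (the "Scope Out" name and surrounding setup are illustrative; check the Syphon framework headers for the exact Swift signatures in your version):

```swift
import Foundation
import Metal
import Syphon  // the Syphon framework, linked into your project

// Assumes you already have a Metal device, a command queue,
// and a texture you render into each frame.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Create a named server; other apps will see it in their Syphon source list.
let server = SyphonMetalServer(name: "Scope Out", device: device, options: nil)

func publish(_ texture: MTLTexture) {
    let commandBuffer = queue.makeCommandBuffer()!
    // This hands clients a reference to the GPU texture itself --
    // no readback into CPU memory happens here.
    server.publishFrameTexture(texture,
                               on: commandBuffer,
                               imageRegion: NSRect(x: 0, y: 0,
                                                   width: Double(texture.width),
                                                   height: Double(texture.height)),
                               flipped: false)
    commandBuffer.commit()
}
```

On the receiving side, a Syphon Client does the mirror image: it's notified when a new frame is published and gets a texture it can sample directly, which is why the whole chain stays on the GPU.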

With the latest release, Daydream Scope now supports Syphon natively as both an input source and output destination.

How do I use it?

As long as you're running the latest version of Scope on macOS, you should now see "Syphon" as an input option for models that accept a video source.

Similarly, down in the bottom-right of the app is a toggle that controls whether the app broadcasts its output to Syphon.

Let's see what this looks like in practice!

Multi-app Syphon Chain

We're going to load up two of the most popular creative tools, TouchDesigner and Resolume Arena, and try to create a chain with Scope in the middle.

First we'll load up TouchDesigner with the trusty jellybeans demo video, add a Syphon Spout Out TOP, and then select TouchDesigner as the source in Scope.

We're using the LongLive model and setting the prompt to be "colourful bugs" to create a slightly unsettling bunch of jellybugs scuttling back and forth.

TouchDesigner -> Scope

Next, we'll send the Scope output into Resolume.

TouchDesigner -> Scope -> Resolume

There's a lot going on here, but we now have TouchDesigner (top left) going into Scope (top right) to be transformed, then pushing into Resolume (bottom) to form part of, say, a VJ performance.

We're really excited to see what people build now that they're easily able to add realtime video AI capabilities into their existing workflows. Let us know what you come up with!