Game-Driven Temporal Coherence for Live AI Video


AnchorStream is a high-performance bridge between real-time 3D game engines (Unity 6.4) and generative AI layers (Daydream/Scope). It addresses the "Temporal Instability" problem, the primary barrier to making AI video truly playable. By using engine-native data as a "geometric anchor," AnchorStream keeps characters and environments stable, enabling AI-enhanced gameplay that respects the player's input with reduced latency.

Traditional AI video regenerates every pixel of every frame from scratch. AnchorStream shifts this burden by utilizing:

- Engine-Native Motion Vectors: We pass 32-bit motion data directly into the latent space. Instead of the AI "guessing" movement, it "knows" exactly where pixels should go, creating a 1:1 motion-to-pixel lock.

- Dirty-Region Selective Infilling: We only ask the AI to update "dirty" pixels: areas where movement has occurred or the background has been disoccluded. This reduces total compute load, enabling high-fidelity output on a wider range of hardware.

- Structural Silhouette Anchoring: Using a secondary camera to extract binary silhouette masks and OpenPose skeleton data, we anchor the AI’s identity to the game’s geometry, preventing pose "hallucinations".
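The motion-vector anchoring above amounts to warping the previous frame with the engine's per-pixel motion buffer, so the AI refines a prediction rather than guessing motion. A minimal NumPy sketch (the function name and nearest-neighbor backward warp are illustrative, not the shipping implementation):

```python
import numpy as np

def warp_with_motion_vectors(prev_frame, motion):
    """Warp the previous frame using engine-supplied motion vectors.

    prev_frame: (H, W, 3) float32 RGB image.
    motion:     (H, W, 2) float32 per-pixel (dx, dy) offsets in pixels,
                e.g. decoded from the engine's motion-vector buffer.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warp: the pixel now at (x, y) came from (x - dx, y - dy).
    # Nearest-neighbor sampling keeps the sketch short; a real pipeline
    # would use bilinear sampling in the latent space.
    src_x = np.clip(np.round(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - motion[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]
```

Because the offsets come straight from the engine, the warp is exact wherever geometry is visible in both frames, which is what creates the 1:1 motion-to-pixel lock.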
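The dirty-region pass then reduces to a mask: a pixel is "dirty" if its motion magnitude exceeds a threshold or the engine flags it as disoccluded. A hedged sketch (the function, its threshold default, and the disocclusion input are assumptions about how such a mask could be built):

```python
import numpy as np

def dirty_mask(motion, disoccluded, motion_thresh=0.5):
    """Mark pixels the generative model must regenerate.

    motion:      (H, W, 2) per-pixel offsets in pixels.
    disoccluded: (H, W) bool, True where background was newly revealed
                 (e.g. from a depth/stencil test in the engine).
    """
    moved = np.linalg.norm(motion, axis=-1) > motion_thresh
    # Dirty = moved OR newly revealed; everything else reuses warped output.
    return moved | disoccluded
```

Only pixels where the mask is True are sent to the AI infill step; the clean majority of the frame is carried over from the warped previous frame, which is where the compute savings come from.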
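Silhouette anchoring can be checked with a simple overlap score: compare the AI output's subject mask against the binary silhouette rendered by the secondary camera, and re-anchor when agreement drops. The IoU metric and threshold logic below are an illustrative sketch, not the project's actual rejection rule:

```python
import numpy as np

def silhouette_iou(anchor_mask, generated_mask):
    """Intersection-over-union between the engine silhouette and the
    subject mask segmented from the generated frame. Both are (H, W) bool."""
    inter = np.logical_and(anchor_mask, generated_mask).sum()
    union = np.logical_or(anchor_mask, generated_mask).sum()
    return inter / union if union else 1.0

# Example policy: if IoU falls below a threshold, the pose has drifted
# ("hallucinated"), so re-condition the next generation step on the
# engine silhouette and skeleton data instead of the model's own output.
```

The same idea extends to the OpenPose skeleton: joint positions from the game rig give a per-frame pose target, and large deviations in the generated pose trigger the same re-anchoring.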

This is still a work in progress; this page will be updated as the work advances!