Scope Release v0.1.0-beta.3


This is the first release of 2026! We're finalizing the roadmap for the first non-beta stable release, but in the meantime we wanted to share a couple of updates in v0.1.0-beta.3.

A quick shoutout to @ddickinson, who implemented VACE for Krea and real-time preprocessing with Video Depth Anything in a fork of Scope during the Interactive AI Video Program. His work inspired us to bring these features into Scope so others could use them too. Check out his Video Conductor project, which makes use of these features!

VACE with Krea Realtime Video

The UI and API now support VACE tasks (e.g. R2V, VACE V2V) with Krea Realtime Video.

Since Krea is distilled from Wan2.1-T2V-14B, it can be used with any Wan2.1-T2V-14B LoRA. The attached demo video swaps between the following LoRAs:

Get started with VACE and Krea here and follow this guide to bring in LoRAs.
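
To give a concrete feel for what API configuration might look like, here is a minimal sketch. The endpoint, field names (`pipeline`, `vace`, `loras`), and file paths are all hypothetical placeholders rather than Scope's documented API; the guides linked above cover the real parameters.

```python
import requests

# Hypothetical sketch of enabling a VACE task with Krea Realtime Video.
# The endpoint and every field name below are assumptions, not Scope's
# documented API -- see the guides linked above for the real parameters.
config = {
    "pipeline": "krea-realtime-video",
    "vace": {
        "task": "r2v",                 # or a VACE V2V task
        "reference_image": "ref.png",  # hypothetical R2V reference input
    },
    # Krea is distilled from Wan2.1-T2V-14B, so any Wan2.1-T2V-14B LoRA
    # can be attached; the path and scale here are placeholders.
    "loras": [
        {"path": "loras/my-wan21-t2v-14b.safetensors", "scale": 1.0},
    ],
}

resp = requests.post("http://localhost:8000/api/pipeline", json=config)
resp.raise_for_status()
```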

Note: VACE with Krea currently requires a GPU with at least 55GB of VRAM (this number may change in the future with additional optimizations).

Real-time preprocessing with Video Depth Anything

When VACE is enabled in the UI and/or API, you can pass in a control video, i.e. a video containing "control signals" such as depth maps, poses, or optical flow (motion) instead of normal RGB frames, so that the diffusion model generates video that follows those signals over time.

This presents two problems:

  1. How do you get a control video?
  2. How do you get a control video for a live feed (eg. webcam, live capture of a canvas, etc.)?

The first can be addressed by creating control videos in other software like ComfyUI, but that is inconvenient and does not address the second.

v0.1.0-beta.3 solves both problems for depth control videos by introducing the concept of real-time preprocessors, with Video Depth Anything as the first built-in preprocessor. Instead of creating a depth control video separately, you can just use a normal video (static or live). In the UI and API, you can configure Video Depth Anything as a preprocessor that converts incoming video (whether a static uploaded file or a live webcam feed) into depth maps in real time, which are then automatically forwarded to a VACE-enabled diffusion pipeline.
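
Conceptually, the preprocessing loop looks something like the sketch below. Scope handles this wiring internally when the preprocessor is configured; `estimate_depth` here is only a stand-in for Video Depth Anything inference, and the pipeline hand-off is shown as a comment.

```python
import cv2

# Conceptual sketch of real-time depth preprocessing. Scope does this
# internally when Video Depth Anything is configured as a preprocessor;
# estimate_depth below is a stand-in, NOT the real model.

def estimate_depth(frame):
    # Placeholder for Video Depth Anything inference on one frame.
    # Inverted grayscale is used here only so the sketch is runnable.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return 255 - gray

cap = cv2.VideoCapture(0)  # live webcam feed (a video file path works too)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    depth = estimate_depth(frame)  # RGB frame -> per-pixel depth map
    # The stream of depth frames is the control video that a
    # VACE-enabled pipeline conditions on, frame by frame:
    # vace_pipeline.push_control_frame(depth)  # hypothetical hand-off
cap.release()
```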

The attached demo video uses depth preprocessing with Krea.

Additional real-time preprocessors can be introduced via plugins which we'll share more about soon!
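
The plugin interface hasn't been published yet, so the shape below is purely speculative, but the core contract is presumably a per-frame transform from RGB to a control signal:

```python
from typing import Protocol
import numpy as np

class RealtimePreprocessor(Protocol):
    # Speculative sketch only -- the real plugin interface has not been
    # published. The essence: map one RGB frame to one control frame.
    def process(self, frame: np.ndarray) -> np.ndarray:
        """Return a control frame (e.g. a depth map) for one RGB frame."""
        ...
```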

Get started with Video Depth Anything preprocessing in the UI or follow this guide for configuring it in the API.

Configurable VAEs

The UI and API now support configuring different VAEs (from LightX2V) for autoregressive video models.

These VAEs offer different tradeoffs between quality, speed, and memory usage. In some of our tests with LongLive V2V (no VACE), we observed a 31-44% improvement in throughput and a 24-30% improvement in latency on an RTX 5090 on Windows. Since the right tradeoff depends on the use case, we encourage folks to try out each of the VAEs themselves.

The attached demo video uses LightVAE.

Get started with these VAEs here or follow this guide for configuring them in the API.
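
As with VACE above, a hypothetical API sketch of switching VAEs might look like the following; the endpoint, `vae` field, and identifiers are assumptions rather than Scope's real parameter names, so check the linked guide for the actual configuration.

```python
import requests

# Hypothetical sketch of selecting a different VAE for an autoregressive
# video pipeline. Field names and VAE identifiers are assumptions, not
# Scope's documented API -- see the guide linked above for the real config.
config = {
    "pipeline": "longlive",   # autoregressive video model
    "mode": "v2v",
    "vae": "lightvae",        # hypothetical identifier for a LightX2V VAE
}

resp = requests.post("http://localhost:8000/api/pipeline", json=config)
resp.raise_for_status()
```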

Resources