This is the first stable release of Scope!
This release introduces a plugin system that enables third-party extensions to provide custom pipelines. These custom pipelines can add new models, visual effects, and other capabilities beyond the built-in pipelines already supported in Scope. And every plugin pipeline can immediately be used in both the Scope UI and API - just write a pipeline that outputs video frames and you'll see its live output in the UI and be able to access the stream via a WebRTC API.
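To make the idea concrete, here's a minimal sketch of what a frame-producing pipeline could look like. The class name, method name, and the assumption that frames are HxWx3 uint8 RGB arrays are all illustrative, not Scope's actual plugin interface - see the plugins guide for the real API.

```python
import numpy as np

# Hypothetical sketch only: the class shape below is illustrative,
# not Scope's actual plugin API.
class InvertPipeline:
    """A toy VFX pipeline that inverts the colors of each video frame."""

    def process(self, frame: np.ndarray) -> np.ndarray:
        # frame is assumed to be an HxWx3 uint8 RGB image;
        # inverting it is the entire "effect".
        return 255 - frame

pipeline = InvertPipeline()
frame = np.zeros((2, 2, 3), dtype=np.uint8)  # a tiny all-black frame
out = pipeline.process(frame)                # all-white frame
```

A plugin that outputs frames like this would show its live output in the UI and expose the stream over WebRTC, per the release notes above.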
The video for this post shows the use of a VFX plugin to create real-time chromatic aberration and VHS/Retro CRT visual effects.
Check out the plugins guide to get started, and browse the plugins directory for example plugins that can be installed.
We introduced the first real-time preprocessor (Video Depth Anything) in the v0.1.0-beta.3 release, and in this release we're introducing more:
These pipelines can all be used as real-time preprocessors to guide VACE V2V.
We've also added support for using pipelines as postprocessors. A preprocessor is a pipeline that runs before the main pipeline, typically augmenting the input video to better guide the generation that happens in the main pipeline. A postprocessor is a pipeline that runs after the main pipeline, typically augmenting the output video before it is viewed.
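The ordering described above can be sketched as simple function composition. The function names and the use of plain numbers in place of video frames are illustrative assumptions, not Scope internals:

```python
# Conceptual sketch of preprocessor -> main pipeline -> postprocessor
# chaining. Real pipelines operate on video frames; plain floats stand
# in here to keep the data flow visible.
def preprocess(x: float) -> float:
    # e.g. deriving a depth map that guides the main pipeline
    return x * 0.5

def main_pipeline(x: float) -> float:
    # e.g. the generation step (VACE V2V in Scope's case)
    return x + 1.0

def postprocess(x: float) -> float:
    # e.g. frame interpolation on the generated output
    return x * 2.0

def run(x: float) -> float:
    # preprocessor runs before the main pipeline,
    # postprocessor runs after it
    return postprocess(main_pipeline(preprocess(x)))

result = run(2.0)  # (2.0 * 0.5 + 1.0) * 2.0 = 4.0
```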
The first built-in postprocessor is RIFE, a pipeline for real-time frame interpolation which can help smooth out playback and increase video throughput (FPS).
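RIFE itself estimates motion with a learned model; as a rough illustration of the insertion pattern only, the sketch below uses naive linear blending (an assumption of this example, not how RIFE works) to insert frames between consecutive pairs and multiply the frame count:

```python
import numpy as np

def interpolate(frames: list[np.ndarray], factor: int = 2) -> list[np.ndarray]:
    """Insert (factor - 1) blended frames between each consecutive pair,
    roughly multiplying FPS by `factor`. Linear blending is a crude
    stand-in for RIFE's learned flow-based interpolation."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            # weighted average of the two neighboring frames
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Two 1x1 frames: doubling the rate inserts one blended frame between them.
frames = [np.full((1, 1, 3), 0, np.uint8), np.full((1, 1, 3), 200, np.uint8)]
smoothed = interpolate(frames, factor=2)  # 3 frames; middle is the blend
```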
A quick shoutout to Ning Dong who prototyped the RIFE pipeline during the Interactive AI Video Program in December/January. Their work inspired us to bring these features into Scope so others could use them too.
We shared the Overworld plugin a few weeks ago, which introduced a pipeline that accepts real-time WASD and mouse controls as input. Any pipeline can now declare that it supports controller input and handle WASD and mouse controls with its own custom logic.
In a previous release, we introduced reference images with VACE which can be used to stylistically influence generation.
This release introduces first frame and last frame support with VACE (often referred to as FFLF), which extends what you can do with reference images. The first frame feature uses an image as the exact starting frame of the next chunk of video, so the model generates what happens next. The last frame feature uses an image as the exact ending frame of the next chunk, so the model generates what happens before it. And first frame and last frame can be used together to establish the starting and ending points of the next chunk of video, with the model generating the in-between frames.
Starting with this release, you'll notice a recording button in the UI that can be used to toggle recording on and off. If you toggle recording on before you start a stream, you'll be able to click "Export" and then save the generated video.