This is an experiment running Scope on an iPad (hosted on RunPod). It was interesting to have the mobility and be able to move around my space. It felt like an AI-enabled camera that I could point at anything and, using my imagination, transform it into anything I wanted.
The pipeline (a rough config sketch follows the list):
* Base model: Krea
* Preprocessor: Video Depth Anything
* VACE: on (automatically uses the depth map as input)
* Input: front-facing camera (I couldn't switch to the back-facing camera in the Scope UI; I think the back-facing camera would be more interesting to use)
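
For reference, the settings above could be written out as a single config. This is a hypothetical sketch only: the key names (`base_model`, `preprocessor`, `vace`, `input_source`) are illustrative and not Scope's actual configuration schema.

```python
import json

# Hypothetical config mirroring the pipeline list above.
# Key names are illustrative, not Scope's real schema.
pipeline_config = {
    "base_model": "krea",                     # Krea as the base generation model
    "preprocessor": "video-depth-anything",   # Video Depth Anything produces per-frame depth maps
    "vace": {
        "enabled": True,                      # VACE on
        "control_input": "depth_map",         # automatically consumes the preprocessor's depth map
    },
    "input_source": "front_camera",           # iPad front-facing camera (back camera not selectable in the UI)
}

if __name__ == "__main__":
    # Print the config so it can be eyeballed or pasted elsewhere.
    print(json.dumps(pipeline_config, indent=2))
```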