2025-01-21
arXiv

Continuous 3D Perception Model with Persistent State

Qianqian Wang, Yifei Zhang, Aleksander Holynski, Alexei A. Efros, Angjoo Kanazawa
The paper introduces CUT3R, a stateful recurrent model that updates a persistent state with each incoming image and decodes metric-scale pointmaps for every frame in a shared coordinate system, yielding a coherent, dense 3D reconstruction that grows online. Because the state encodes scene priors, unseen regions can also be inferred by probing it with virtual, unobserved views (see the sketches below). The model handles video streams and unordered photo collections, static or dynamic, and is competitive or state-of-the-art across a range of 3D/4D tasks.
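To make the update loop in the summary concrete, here is a minimal PyTorch-style sketch. Everything in it (the `RecurrentPointmapModel` name, token counts, and the use of `TransformerDecoderLayer` for the read/update steps) is an illustrative assumption, not CUT3R's actual architecture; it only shows the control flow: per-frame tokens read from a persistent state, the state absorbs the observation, and pointmaps accumulate in one world frame.

```python
import torch
import torch.nn as nn

class RecurrentPointmapModel(nn.Module):
    """Stand-in for a stateful recurrent 3D model: each frame's tokens
    read from a persistent state via cross-attention, the state is then
    updated from the tokens, and a head decodes one 3D point per patch
    in a shared world frame. Shapes and modules are illustrative only."""

    def __init__(self, dim=256, n_state=64, patch=16):
        super().__init__()
        self.patch = patch
        self.state0 = nn.Parameter(torch.zeros(1, n_state, dim))  # learned initial state
        self.encode = nn.Linear(3 * patch * patch, dim)           # patchify + embed stub
        self.interact = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.update = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.head = nn.Linear(dim, 3)                             # per-patch 3D point

    def init_state(self, batch=1):
        return self.state0.expand(batch, -1, -1)

    def step(self, state, image):
        b, c, h, w = image.shape
        p = self.patch
        # (b, c, h//p, w//p, p, p) -> (b, n_patches, c*p*p) token grid
        patches = image.unfold(2, p, p).unfold(3, p, p)
        tokens = self.encode(patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p))
        tokens = self.interact(tokens, state)   # tokens read from the state
        state = self.update(state, tokens)      # state absorbs the new observation
        return state, self.head(tokens)         # (b, n_patches, 3) world-frame points

# Streaming usage: points from every frame accumulate in one coordinate system.
model = RecurrentPointmapModel()
state, cloud = model.init_state(), []
for frame in torch.randn(5, 1, 3, 224, 224):    # simulated 5-frame stream
    state, pts = model.step(state, frame)
    cloud.append(pts)
reconstruction = torch.cat(cloud, dim=1)        # (1, 5 * 196, 3) point cloud
```

Because the state is the only thing carried between frames, the loop naturally accepts streams of any length, ordered or not, which is the flexibility the summary refers to.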
We present a unified framework capable of solving a broad range of 3D tasks. Our approach features a stateful recurrent model that continuously updates its state representation with each new observation. Given a stream of images, this evolving state can be used to generate metric-scale pointmaps (per-pixel 3D points) for each new input in an online fashion. These pointmaps reside within a common coordinate system, and can be accumulated into a coherent, dense scene reconstruction that updates as new images arrive. Our model, called CUT3R (Continuous Updating Transformer for 3D Reconstruction), captures rich priors of real-world scenes: not only can it predict accurate pointmaps from image observations, but it can also infer unseen regions of the scene by probing at virtual, unobserved views. Our method is simple yet highly flexible, naturally accepting varying lengths of images that may be either video streams or unordered photo collections, containing both static and dynamic content. We evaluate our method on various 3D/4D tasks and demonstrate competitive or state-of-the-art performance in each. Project Page: https://cut3r.github.io/
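The abstract's "probing at virtual, unobserved views" can be read as a read-only query of the same state. Below is a hedged continuation of the sketch above (it reuses `state`, `torch`, and `nn` from that block): a virtual camera is encoded as per-patch rays and cross-attends to the state without modifying it. The 6D ray encoding, the `VirtualViewProber` name, and the output head are assumptions for illustration, not the paper's actual design.

```python
class VirtualViewProber(nn.Module):
    """Read-only probe of the persistent state: a virtual camera is given
    as per-patch rays, and cross-attention against the (unchanged) state
    predicts 3D points for the unobserved view. Illustrative only."""

    def __init__(self, dim=256):
        super().__init__()
        self.ray_embed = nn.Linear(6, dim)   # hypothetical 6D ray: origin + direction
        self.read = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.head = nn.Linear(dim, 3)

    def forward(self, state, rays):
        queries = self.read(self.ray_embed(rays), state)  # state is not updated
        return self.head(queries)                         # (b, n, 3) inferred points

prober = VirtualViewProber()
virtual_rays = torch.randn(1, 196, 6)        # rays of a camera never observed
unseen_points = prober(state, virtual_rays)  # geometry beyond the input views
```

The key design point is that probing leaves the state untouched, so speculative queries of unseen regions never contaminate the accumulated reconstruction.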