We propose a method for text-driven perpetual view generation -- synthesizing long videos of arbitrary scenes solely from an input text describing the scene and camera poses. We introduce a novel framework that generates such videos in an online fashion by combining the generative power of a pre-trained text-to-image model with the geometric priors learned by a pre-trained monocular depth prediction model. To achieve 3D consistency, i.e., to generate videos that depict geometrically plausible scenes, we deploy online test-time training that encourages the predicted depth map of the current frame to be geometrically consistent with the synthesized scene; the depth maps are used to construct a unified mesh representation of the scene, which is updated throughout the generation process and used for rendering. In contrast to previous works, which are applicable only to limited domains (e.g., landscapes), our framework generates diverse scenes, such as walkthroughs in spaceships, caves, or ice castles. Project page: https://scenescape.github.io/
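To make the online pipeline concrete, below is a minimal Python sketch of the per-frame loop the abstract describes: render the current mesh into the new view, outpaint the disoccluded regions with the text-to-image model, test-time-train the depth predictor for consistency with the existing scene, and fuse the frame back into the mesh. The helpers render_mesh, inpaint_frame, and update_mesh, as well as the specific loss and optimizer settings, are hypothetical placeholders for illustration only, not the authors' released implementation.

```python
# Minimal sketch of the online generation loop (assumed structure, not official code).
# render_mesh, inpaint_frame, and update_mesh are hypothetical placeholders.

import torch
import torch.nn.functional as F

def generate_walkthrough(prompt, cameras, depth_net, mesh,
                         tto_steps=50, lr=1e-5):
    """Generate frames one at a time while keeping the scene 3D-consistent."""
    frames = []
    for cam in cameras:
        # 1) Render the current unified mesh into the new camera view:
        #    known RGB/depth plus a mask of pixels already covered by the mesh.
        rendered_rgb, rendered_depth, known_mask = render_mesh(mesh, cam)

        # 2) Outpaint the disoccluded (unknown) regions with a pre-trained
        #    text-to-image inpainting model conditioned on the scene prompt.
        frame = inpaint_frame(rendered_rgb, ~known_mask, prompt)

        # 3) Online test-time training: fine-tune the depth predictor so its
        #    output matches the depth rendered from the existing mesh wherever
        #    the scene is already known, enforcing geometric consistency.
        opt = torch.optim.Adam(depth_net.parameters(), lr=lr)
        for _ in range(tto_steps):
            pred_depth = depth_net(frame)
            loss = F.l1_loss(pred_depth[known_mask], rendered_depth[known_mask])
            opt.zero_grad()
            loss.backward()
            opt.step()

        # 4) Back-project the new frame with its refined depth and fuse it into
        #    the unified mesh used to render all subsequent frames.
        with torch.no_grad():
            final_depth = depth_net(frame)
        mesh = update_mesh(mesh, frame, final_depth, cam)

        frames.append(frame)
    return frames
```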