We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an object given just a single RGB image. To perform novel view synthesis in this under-constrained setting, we capitalize on the geometric priors that large-scale diffusion models learn about natural images. Our conditional diffusion model uses a synthetic dataset to learn controls of the relative camera viewpoint, which allow new images of the same object to be generated under a specified camera transformation. Even though it is trained on a synthetic dataset, our model retains a strong zero-shot generalization ability to out-of-distribution datasets as well as in-the-wild images, including impressionist paintings. Our viewpoint-conditioned diffusion approach can further be used for the task of 3D reconstruction from a single image. Qualitative and quantitative experiments show that our method significantly outperforms state-of-the-art single-view 3D reconstruction and novel view synthesis models by leveraging Internet-scale pre-training.
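To make the conditioning interface concrete, below is a minimal sketch, not the authors' implementation, of what viewpoint-conditioned diffusion sampling can look like. The `ViewConditionedDenoiser` module, the relative-pose parameterization as (Δpolar, Δazimuth, Δradius), and the simplified DDIM-style loop are all illustrative assumptions for this sketch; the actual Zero-1-to-3 model is a fine-tuned large-scale latent diffusion model with a far richer conditioning mechanism.

```python
# Minimal sketch (hypothetical, not Zero-1-to-3's code): a toy denoiser that is
# conditioned on the input view and a relative camera transformation, plus a
# simplified deterministic (DDIM-style) sampling loop.
import torch
import torch.nn as nn

class ViewConditionedDenoiser(nn.Module):
    """Toy denoiser: predicts noise from (noisy image, timestep, input view, pose)."""
    def __init__(self, ch=32):
        super().__init__()
        # Embeds (t, Δpolar, Δazimuth, Δradius) into a conditioning vector.
        self.pose_mlp = nn.Sequential(nn.Linear(4, ch), nn.SiLU(), nn.Linear(ch, ch))
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x_t, t, cond_img, pose):
        emb = self.pose_mlp(torch.cat([t[:, None].float(), pose], dim=1))
        # Broadcast the conditioning vector spatially and concatenate channel-wise.
        emb = emb[:, :, None, None].expand(-1, -1, x_t.shape[2], x_t.shape[3])
        return self.net(torch.cat([x_t, cond_img, emb], dim=1))

@torch.no_grad()
def sample_novel_view(model, cond_img, pose, steps=50):
    """Deterministic DDIM-style sampling, conditioned on input view + relative pose."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = torch.cumprod(1.0 - betas, dim=0)  # cumulative alpha-bar schedule
    x = torch.randn_like(cond_img)              # start from pure Gaussian noise
    for i in reversed(range(steps)):
        t = torch.full((x.shape[0],), i, dtype=torch.long)
        eps = model(x, t, cond_img, pose)
        a_t = alphas[i]
        a_prev = alphas[i - 1] if i > 0 else torch.tensor(1.0)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # predicted clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
    return x

# Usage: ask for the same object viewed 30 degrees further around in azimuth.
img = torch.rand(1, 3, 64, 64) * 2 - 1      # stand-in for an RGB input in [-1, 1]
pose = torch.tensor([[0.0, 0.5236, 0.0]])   # (Δpolar, Δazimuth ≈ 30°, Δradius)
novel = sample_novel_view(ViewConditionedDenoiser(), img, pose)
```

The design point the sketch illustrates is that the camera transformation enters only as an extra conditioning signal to the denoiser, so the same pre-trained image prior can be steered toward any requested relative viewpoint at sampling time.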