We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene in less than a second on a standard GPU, via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple datasets, reducing LPIPS by 25-34% and DISTS by 21-43% versus the best prior model, while lowering synthesis time by three orders of magnitude. Code and weights are provided at https://github.com/apple/ml-sharp.