Synthesizing photo-realistic images from a point cloud is challenging because of the sparsity of the point cloud representation. Recently, Neural Radiance Fields (NeRF) and its extensions have been proposed to synthesize realistic images from 2D inputs. In this paper, we present Point2Pix, a novel point renderer that links sparse 3D point clouds with dense 2D image pixels. Taking advantage of 3D point cloud priors and the NeRF rendering pipeline, our method synthesizes high-quality images from colored point clouds and generalizes to novel indoor scenes. To improve the efficiency of ray sampling, we propose point-guided sampling, which concentrates computation on valid samples near the point cloud. We also present Point Encoding to build Multi-scale Radiance Fields that provide discriminative 3D point features. Finally, we propose Fusion Encoding to efficiently synthesize high-quality images. Extensive experiments on the ScanNet and ArkitScenes datasets demonstrate the effectiveness and generalization of our method.
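The point-guided sampling idea above can be illustrated with a minimal sketch: rather than sampling depths uniformly along each camera ray (as in vanilla NeRF), samples are placed only at depths where the ray passes close to points of the cloud. This is an illustrative assumption of how such a sampler might work, not the paper's actual implementation; all function and parameter names (`point_guided_samples`, `radius`, `n_per_point`, `jitter`) are hypothetical.

```python
import numpy as np

def point_guided_samples(origin, direction, points,
                         radius=0.1, n_per_point=4, jitter=0.05, seed=0):
    """Hypothetical sketch of point-guided ray sampling: place samples
    only near depths where the ray passes close to cloud points."""
    rng = np.random.default_rng(seed)
    d = direction / np.linalg.norm(direction)
    rel = points - origin                # (N, 3) vectors from ray origin to points
    t = rel @ d                          # depth of each point's projection onto the ray
    perp = rel - np.outer(t, d)          # perpendicular offset of each point from the ray
    dist = np.linalg.norm(perp, axis=1)
    valid = (dist < radius) & (t > 0)    # keep points the ray actually passes near
    centers = t[valid]
    if centers.size == 0:
        return np.empty(0)               # ray misses the cloud: no samples
    # Jitter a few samples around each valid depth instead of covering the whole ray.
    samples = centers[:, None] + rng.uniform(-jitter, jitter,
                                             (centers.size, n_per_point))
    return np.sort(samples.ravel())
```

For a ray looking down the z-axis through a cloud with points at depths 1 and 2 (plus a far-off outlier), this returns a small set of depths clustered around 1 and 2, so the radiance field is only evaluated where geometry is likely to exist.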