View synthesis methods built on 3D point cloud representations have proven effective. However, existing methods usually synthesize novel views from only a single source view, and generalizing them to handle multiple source views for higher reconstruction quality is non-trivial. In this paper, we propose a new deep learning-based view synthesis paradigm that learns a unified 3D point cloud from different source views. Specifically, we first construct sub-point clouds by projecting the source views into 3D space based on their depth maps. We then learn the unified 3D point cloud by adaptively fusing points within local neighborhoods defined on the union of the sub-point clouds. In addition, we propose a 3D geometry-guided image restoration module that fills holes and recovers high-frequency details in the rendered novel views. Experimental results on three benchmark datasets demonstrate that our method outperforms state-of-the-art view synthesis methods by a large margin, both quantitatively and visually.
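To make the first step of the pipeline concrete, the sketch below shows how a single source view can be lifted into a sub-point cloud by back-projecting its depth map through a pinhole camera model. This is only an illustrative sketch, not the paper's implementation: the function name, the assumption of known intrinsics `K` and a camera-to-world pose, and the NumPy formulation are all our own; the paper's learned fusion of the unioned sub-clouds would follow this step.

```python
import numpy as np

def backproject_to_point_cloud(depth, rgb, K, cam_to_world):
    """Lift one source view into a colored sub-point cloud (illustrative only).

    depth:        (H, W) per-pixel depth map of the source view
    rgb:          (H, W, 3) source image
    K:            (3, 3) pinhole camera intrinsics (assumed known)
    cam_to_world: (4, 4) camera-to-world extrinsics (assumed known)
    """
    H, W = depth.shape
    # Pixel grid in homogeneous image coordinates [u, v, 1].
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Back-project each pixel: X_cam = depth * K^{-1} [u, v, 1]^T.
    rays = pix @ np.linalg.inv(K).T            # (H*W, 3) camera-space rays
    pts_cam = rays * depth.reshape(-1, 1)      # scale each ray by its depth

    # Transform into the shared world frame so sub-clouds from
    # different source views can be unioned before fusion.
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_h @ cam_to_world.T)[:, :3]

    colors = rgb.reshape(-1, 3)
    return pts_world, colors

# Sub-point clouds from multiple source views; the paper's adaptive
# point fusion over local neighborhoods would operate on their union.
# clouds = [backproject_to_point_cloud(d, im, K, T) for d, im, T in views]
```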