We address the task of view synthesis: generating novel views of a scene given a set of images as input. In many recent works such as NeRF (Mildenhall et al., 2020), the scene geometry is parameterized using neural implicit representations (i.e., MLPs). Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency. In this work, we propose a new approach that performs view synthesis using point clouds. It is the first point-based method to achieve better visual quality than NeRF while rendering over 100x faster. Our approach builds on existing work on differentiable point-based rendering but introduces a novel technique we call "Sculpted Neural Points (SNP)", which significantly improves robustness to errors and holes in the reconstructed point cloud. We further propose to use view-dependent point features based on spherical harmonics to capture non-Lambertian surfaces, along with new designs in the point-based rendering pipeline that further boost performance. Finally, we show that our system supports fine-grained scene editing. Code is available at https://github.com/princeton-vl/SNP.
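To make the view-dependent point features concrete, the sketch below shows one common way such features are realized: each point stores spherical-harmonic (SH) coefficients per color channel, and the rendered color is obtained by evaluating the SH basis at the viewing direction. This is a minimal illustration using a degree-2 real SH basis (as popularized by works like Plenoxels), not the authors' implementation; the function names and the use of a sigmoid to map to RGB are assumptions for the example.

```python
import torch

def sh_basis_deg2(dirs: torch.Tensor) -> torch.Tensor:
    """Real spherical-harmonic basis up to degree 2.

    dirs: (N, 3) unit view directions -> returns (N, 9) basis values.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return torch.stack([
        torch.full_like(x, 0.28209479177387814),              # l = 0
        -0.4886025119029199 * y,                               # l = 1
         0.4886025119029199 * z,
        -0.4886025119029199 * x,
         1.0925484305920792 * x * y,                           # l = 2
        -1.0925484305920792 * y * z,
         0.31539156525252005 * (3.0 * z * z - 1.0),
        -1.0925484305920792 * x * z,
         0.5462742152960396 * (x * x - y * y),
    ], dim=-1)

def view_dependent_color(sh_coeffs: torch.Tensor, view_dirs: torch.Tensor) -> torch.Tensor:
    """Combine per-point SH coefficients with the view direction.

    sh_coeffs: (N, 3, 9) learned coefficients (RGB x 9 basis terms).
    view_dirs: (N, 3) unit vectors from each point toward the camera.
    Returns (N, 3) view-dependent colors in [0, 1].
    """
    basis = sh_basis_deg2(view_dirs)                           # (N, 9)
    return torch.sigmoid((sh_coeffs * basis[:, None, :]).sum(-1))
```

Because the basis depends only on the viewing direction, non-Lambertian effects such as specular highlights can vary across views while the per-point coefficients remain fixed and learnable.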