Contemporary registration devices for 3D visual information, such as LIDARs and various depth cameras, capture data as 3D point clouds. Such clouds, in turn, are challenging to process due to their size and complexity. Existing methods address this problem by fitting a mesh to the point cloud and rendering it instead. This approach, however, reduces the fidelity of the resulting visualization and discards the color information of the objects, which is crucial in computer graphics applications. In this work, we propose to mitigate this challenge by representing 3D objects as Neural Radiance Fields (NeRFs). We leverage the hypernetwork paradigm and train a model that takes a 3D point cloud with associated color values and returns the weights of a NeRF network reconstructing the 3D object from input 2D images. Our method provides an efficient 3D object representation and offers several advantages over existing approaches, including the ability to condition NeRFs and improved generalization beyond the objects seen during training. We also confirm the latter in our empirical evaluation.
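The core mechanism described above can be sketched in miniature: a hypernetwork consumes an order-invariant encoding of a colored point cloud and emits the flattened weights of a small target MLP (the NeRF), which is then evaluated at query positions. All dimensions, the random-projection encoder, and the single-linear-layer hypernetwork below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed tiny target "NeRF" MLP: 3D position -> (R, G, B, density),
# one hidden layer. Real NeRFs are deeper and use positional encoding.
D_IN, D_HID, D_OUT = 3, 16, 4
n_target = D_IN * D_HID + D_HID + D_HID * D_OUT + D_OUT  # flat weight count

D_FEAT = 32  # pooled point-cloud feature size (assumed)

# Fixed random lift of (x, y, z, r, g, b) followed by max-pooling:
# a stand-in for a learned permutation-invariant point-cloud encoder.
lift = rng.normal(0.0, 0.1, (6, D_FEAT))

def encode_cloud(points_rgb):
    """Order-invariant cloud encoding via max-pooled random features."""
    return np.tanh(points_rgb @ lift).max(axis=0)

# Hypernetwork reduced to one linear map from cloud features to the
# flattened target-network weights (a deliberate simplification).
W_hyper = rng.normal(0.0, 0.1, (D_FEAT, n_target))

def nerf_forward(theta, x):
    """Evaluate the tiny MLP with weights unpacked from flat vector theta."""
    i = 0
    W1 = theta[i:i + D_IN * D_HID].reshape(D_IN, D_HID); i += D_IN * D_HID
    b1 = theta[i:i + D_HID]; i += D_HID
    W2 = theta[i:i + D_HID * D_OUT].reshape(D_HID, D_OUT); i += D_HID * D_OUT
    b2 = theta[i:]
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                # raw (R, G, B, density) per query point

cloud = rng.normal(size=(1024, 6))     # toy colored point cloud (xyz + rgb)
theta = encode_cloud(cloud) @ W_hyper  # hypernetwork emits per-object weights
out = nerf_forward(theta, np.array([[0.1, 0.2, 0.3]]))
print(out.shape)  # (1, 4)
```

The key property this illustrates is conditioning: each input cloud yields its own weight vector `theta`, so a single trained hypernetwork generalizes across objects without per-object optimization.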