Neural rendering has emerged as a powerful paradigm for image synthesis, offering several advantages over classical rendering by using neural networks to reconstruct surfaces, represent shapes, and synthesize novel views of objects or scenes. In neural rendering, the environment is encoded into a neural network. We believe that these new representations can be used to encode the scene for a mobile robot. In this work, we therefore compare a popular neural rendering model, tiny-NeRF, against volumetric representations commonly used as maps in robotics: voxel maps, point clouds, and triangular meshes. The goal is to identify the advantages and disadvantages of neural representations in a robotics context. The comparison is made in terms of spatial complexity and the processing time required to obtain a model. Experiments show that tiny-NeRF requires about three times less memory than the other representations, but takes about six times longer to compute the model.
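To make the idea of "encoding the environment into a neural network" concrete, the following is a minimal sketch of the kind of model tiny-NeRF refers to: a positional encoding of 3D coordinates fed into a small fully connected network that outputs colour and volume density. The layer widths, depth, and frequency count here are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map 3D points to a higher-dimensional space with sin/cos features."""
    feats = [x]
    for i in range(num_freqs):
        for fn in (torch.sin, torch.cos):
            feats.append(fn((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Small MLP mapping an encoded 3D point to RGB colour and density."""
    def __init__(self, num_freqs=6, hidden=128, depth=4):
        super().__init__()
        in_dim = 3 + 3 * 2 * num_freqs  # raw xyz plus sin/cos features
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 4)  # (R, G, B, sigma)

    def forward(self, xyz):
        h = self.backbone(positional_encoding(xyz))
        out = self.head(h)
        rgb = torch.sigmoid(out[..., :3])   # colours in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

# The scene is stored entirely in the network weights, so the memory cost
# is the parameter count, independent of scene extent or sampling resolution.
model = TinyNeRF()
n_params = sum(p.numel() for p in model.parameters())
print(f"scene 'map' size: {n_params * 4 / 1024:.1f} KiB (float32)")
```

This constant, resolution-independent memory footprint is what the spatial-complexity comparison measures against voxel maps, point clouds, and triangular meshes, whose size grows with scene extent and resolution.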