Compositional representations of the world are a promising step towards enabling high-level scene understanding and efficient transfer to downstream tasks. Learning such representations for complex scenes and tasks remains an open challenge. Towards this goal, we introduce Neural Radiance Field Codebooks (NRC), a scalable method for learning object-centric representations through novel view reconstruction. NRC learns to reconstruct scenes from novel views using a dictionary of object codes that are decoded through a volumetric renderer. This enables the discovery of recurring visual and geometric patterns across scenes that transfer to downstream tasks. We show that NRC representations transfer well to object navigation in THOR, outperforming 2D and 3D representation learning methods by 3.1% in success rate. We demonstrate that our approach performs unsupervised segmentation on more complex synthetic (THOR) and real (NYU Depth) scenes better than prior methods (29% relative improvement). Finally, we show that NRC improves on the task of depth ordering by 5.5% in accuracy in THOR.
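To make the core mechanism concrete, below is a minimal PyTorch sketch of the idea described above: a learned dictionary of object codes that conditions a NeRF-style decoder, whose per-point outputs are composited with standard volume rendering and supervised by a photometric loss on a novel view. This is an illustrative sketch, not the paper's actual architecture; the class names, dimensions, soft-assignment scheme, and sampling setup are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectCodebook(nn.Module):
    """A learnable dictionary of object codes shared across scenes (hypothetical)."""

    def __init__(self, num_codes=64, code_dim=128):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, code_dim) * 0.02)

    def forward(self, query):
        # Soft-assign each per-object query vector to the dictionary entries,
        # then return the weighted mixture of codes.
        logits = query @ self.codes.t()          # (B, num_codes)
        weights = F.softmax(logits, dim=-1)
        return weights @ self.codes              # (B, code_dim)


class ConditionedRadianceField(nn.Module):
    """NeRF-style decoder conditioned on an object code: (x, code) -> (sigma, rgb)."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                # 1 density channel + 3 color channels
        )

    def forward(self, points, code):
        # points: (B, S, 3) samples along rays; code: (B, code_dim),
        # broadcast to every sample on the ray.
        code = code[:, None, :].expand(-1, points.shape[1], -1)
        out = self.mlp(torch.cat([points, code], dim=-1))
        sigma = F.relu(out[..., :1])             # non-negative density
        rgb = torch.sigmoid(out[..., 1:])        # colors in [0, 1]
        return sigma, rgb


def volume_render(sigma, rgb, deltas):
    """Standard volumetric rendering quadrature along each ray."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)              # (B, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                         # transmittance
    weights = alpha * trans                                           # (B, S)
    return (weights[..., None] * rgb).sum(dim=1)                      # (B, 3)


# Toy usage: render ray colors for a novel view and take an L2 photometric loss.
# The per-object query latents would come from an image encoder (assumed here).
codebook = ObjectCodebook()
field = ConditionedRadianceField()
query = torch.randn(8, 128)                      # 8 object queries
code = codebook(query)
pts = torch.rand(8, 32, 3)                       # 32 samples along each of 8 rays
sigma, rgb = field(pts, code)
deltas = torch.full((8, 32), 0.03)               # sample spacing along each ray
pred = volume_render(sigma, rgb, deltas)
loss = F.mse_loss(pred, torch.rand(8, 3))        # stand-in for ground-truth pixels
```

Because the codebook entries are shared across all training scenes, gradients from novel-view reconstruction push recurring visual and geometric patterns into reusable codes, which is what makes the representation transferable to tasks like navigation and segmentation.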