We present radiance field propagation (RFP), a novel approach to segmenting objects in 3D during reconstruction, given only unlabeled multi-view images of a scene. RFP is derived from emerging neural radiance field-based techniques, which jointly encode semantics with appearance and geometry. The core of our method is a novel propagation strategy for individual objects' radiance fields with a bidirectional photometric loss, enabling an unsupervised partitioning of a scene into salient or meaningful regions corresponding to different object instances. To better handle complex scenes with multiple objects and occlusions, we further propose an iterative expectation-maximization algorithm to refine object masks. RFP is one of the first unsupervised approaches to tackle 3D real-scene object segmentation for neural radiance fields (NeRF) without any supervision, annotations, or other cues such as 3D bounding boxes or prior knowledge of object classes. Experiments demonstrate that RFP achieves feasible segmentation results that are more accurate than those of previous unsupervised image/scene segmentation approaches, and comparable to those of existing supervised NeRF-based methods. The segmented object representations enable individual 3D object editing operations.