We present NeSF, a method for producing 3D semantic fields from posed RGB images alone. In place of classical 3D representations, our method builds on recent work in implicit neural scene representations wherein 3D structure is captured by point-wise functions. We leverage this methodology to recover 3D density fields, upon which we then train a 3D semantic segmentation model supervised by posed 2D semantic maps. Despite being trained on 2D signals alone, our method is able to generate 3D-consistent semantic maps from novel camera poses and can be queried at arbitrary 3D points. Notably, NeSF is compatible with any method producing a density field, and its accuracy improves as the quality of the density field improves. Our empirical analysis demonstrates comparable quality to competitive 2D and 3D semantic segmentation baselines on complex, realistically rendered synthetic scenes. Our method is the first to offer truly dense 3D scene segmentations requiring only 2D supervision for training, and does not require any semantic input for inference on novel scenes. We encourage readers to visit the project website.