Recently, groundbreaking results have been presented on open-vocabulary semantic image segmentation. Such methods segment each pixel in an image into arbitrary categories provided at run-time in the form of text prompts, as opposed to a fixed set of classes defined at training time. In this work, we present a zero-shot volumetric open-vocabulary semantic scene segmentation method. Our method builds on the insight that we can fuse image features from a vision-language model into a neural implicit representation. We show that the resulting feature field can be segmented into different classes by assigning points to natural language text prompts. The implicit volumetric representation enables us to segment the scene both in 3D and 2D by rendering feature maps from any given viewpoint of the scene. We show that our method works on noisy real-world data and can run in real time on live sensor data, dynamically adjusting to text prompts. We also present quantitative comparisons on the ScanNet dataset.
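To make the segmentation step concrete, the sketch below illustrates one way such an assignment could work, assuming a feature field that maps 3D points to features aligned with a CLIP-style text encoder. The function names `feature_field` and `encode_text` are hypothetical placeholders, not the authors' API; the sketch simply shows per-point cosine similarity against prompt embeddings followed by an argmax over classes.

```python
import torch
import torch.nn.functional as F

def segment_points(points, prompts, feature_field, encode_text):
    """Assign each 3D point to the text prompt whose embedding is most similar.

    points:        (N, 3) tensor of 3D coordinates
    prompts:       list of C natural-language class prompts
    feature_field: callable mapping (N, 3) points to (N, D) fused features (hypothetical)
    encode_text:   callable mapping prompts to (C, D) text embeddings (hypothetical)
    """
    point_feats = F.normalize(feature_field(points), dim=-1)  # (N, D), unit-norm point features
    text_feats = F.normalize(encode_text(prompts), dim=-1)    # (C, D), unit-norm prompt embeddings
    similarity = point_feats @ text_feats.T                   # (N, C) cosine similarities
    return similarity.argmax(dim=-1)                          # per-point class index

# Usage sketch: labels = segment_points(xyz, ["chair", "table", "floor"], field, clip_text_encoder)
```

Because the prompts are only encoded at query time, the same fused field can be re-segmented with a new label set without retraining, which is what allows the class vocabulary to be adjusted on the fly.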