Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances. In this work we propose Language Embedded Radiance Fields (LERFs), a method for grounding language embeddings from off-the-shelf models like CLIP into NeRF, which enables these types of open-ended language queries in 3D. LERF learns a dense, multi-scale language field inside NeRF by volume rendering CLIP embeddings along training rays, supervising these embeddings across training views to provide multi-view consistency and smooth the underlying language field. After optimization, LERF can extract 3D relevancy maps for a broad range of language prompts interactively in real-time, which has potential use cases in robotics, understanding vision-language models, and interacting with 3D scenes. LERF enables pixel-aligned, zero-shot queries on the distilled 3D CLIP embeddings without relying on region proposals or masks, supporting long-tail open-vocabulary queries hierarchically across the volume. The project website can be found at https://lerf.io .
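The core rendering step described above, volume rendering language embeddings along a ray with standard NeRF compositing weights and then renormalizing onto the CLIP unit hypersphere, can be sketched as follows. This is a minimal NumPy illustration under assumed inputs (per-sample densities, unit-norm embeddings, and sample spacings), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def render_language_embedding(densities, embeddings, deltas):
    """Alpha-composite per-sample language embeddings along one ray.

    densities:  (N,) non-negative volume densities sigma_i (assumed inputs)
    embeddings: (N, D) unit-norm CLIP-style embeddings at each ray sample
    deltas:     (N,) distances between consecutive samples
    """
    # Standard NeRF compositing: per-sample opacity and accumulated transmittance
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas

    # Weighted sum of embeddings, then renormalize to the unit hypersphere,
    # since CLIP similarity is computed between unit vectors
    emb = (weights[:, None] * embeddings).sum(axis=0)
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb

def relevancy(rendered_emb, text_emb):
    """Cosine similarity between a rendered embedding and a text query embedding."""
    return float(rendered_emb @ text_emb /
                 (np.linalg.norm(rendered_emb) * np.linalg.norm(text_emb) + 1e-8))
```

A query then amounts to encoding a text prompt with CLIP and scoring each rendered pixel embedding with `relevancy`, producing the relevancy maps mentioned above; the actual method additionally handles multiple scales and relevancy normalization against canonical phrases, which this sketch omits.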