3D visual grounding aims to locate the objects in point clouds that are mentioned by free-form natural language descriptions with rich semantic components. However, existing methods either extract sentence-level features that couple all words together, or focus mainly on object names, thereby losing word-level information or neglecting other attributes. To alleviate this issue, we present EDA, which Explicitly Decouples the textual attributes in a sentence and conducts Dense Alignment between such fine-grained language and point cloud objects. Specifically, we first propose a text decoupling module to produce textual features for every semantic component. Then, we design two losses to supervise the dense matching between the two modalities: textual position alignment and object semantic alignment. On top of that, we further introduce two new visual grounding tasks, locating objects without mentioning object names and locating auxiliary objects referenced in the descriptions, both of which thoroughly evaluate a model's dense alignment capacity. Through experiments, we achieve state-of-the-art performance on two widely adopted visual grounding datasets, ScanRefer and SR3D/NR3D, and lead by a clear margin on our two newly proposed tasks. The code will be available at https://github.com/yanmin-wu/EDA.
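To make the dense-alignment idea concrete, below is a minimal PyTorch sketch of a component-to-object alignment loss. It is an illustration under assumed names and shapes (`dense_alignment_loss`, `text_feats`, `obj_feats`, `component_mask`, `target_obj`, and the temperature `tau` are all hypothetical), not the paper's actual implementation; the exact forms of EDA's position alignment and semantic alignment losses are defined in the paper.

```python
import torch
import torch.nn.functional as F

def dense_alignment_loss(text_feats, obj_feats, component_mask, target_obj, tau=0.07):
    """Hypothetical sketch of component-to-object dense alignment.

    text_feats:     (C, D) features for C decoupled text components
    obj_feats:      (K, D) features for K candidate point-cloud objects
    component_mask: (C,)   bool, which components are present in the sentence
    target_obj:     int,   index of the ground-truth object among the K candidates
    """
    # Cosine similarity between every text component and every object proposal.
    text = F.normalize(text_feats, dim=-1)
    objs = F.normalize(obj_feats, dim=-1)
    sim = text @ objs.t() / tau                                      # (C, K)

    # Each present component should match the ground-truth object
    # (cross-entropy over the K object candidates).
    target = torch.full((text.size(0),), target_obj, device=sim.device)
    per_component = F.cross_entropy(sim, target, reduction="none")   # (C,)

    # Average only over components that actually occur in the sentence.
    mask = component_mask.float()
    return (per_component * mask).sum() / mask.sum().clamp(min=1.0)
```

In this sketch, each decoupled component (e.g., object name, attribute, spatial relation) is pushed toward the ground-truth object independently, so supervision is dense over the sentence rather than flowing through a single coupled sentence-level embedding.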