This paper studies the task of grasping any object from known categories by following free-form language instructions. This task demands techniques from computer vision, natural language processing, and robotics, and we bring these disciplines together on this open challenge, which is essential to human-robot interaction. Critically, the key challenge lies in inferring the category of the target object from the linguistic instruction and accurately estimating the 6-DoF pose of unseen objects from known classes. In contrast, previous works focus on inferring the pose of object candidates at the instance level, which significantly limits their applications in real-world scenarios. In this paper, we propose a language-guided 6-DoF category-level object localization model that achieves robotic grasping by comprehending human intention. To this end, we propose a novel two-stage method. Particularly, the first stage grounds the target in the RGB image using the language description of object names, attributes, and spatial relations. The second stage extracts and segments the point cloud from the cropped depth image and estimates the full 6-DoF object pose at the category level. In this manner, our approach can locate the specific object by following human instructions, and estimate the full 6-DoF pose of a category-known but unseen instance that is not used for training the model. Extensive experimental results show that our method is competitive with state-of-the-art language-conditioned grasping methods. Importantly, we deploy our approach on a physical robot to validate the usability of our framework in real-world applications. Please refer to the supplementary material for demo videos of our robot experiments.
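To make the two-stage design concrete, the following is a minimal sketch of how such a pipeline could be orchestrated. The function names `ground_target`, `segment_points`, and `estimate_category_pose` are hypothetical placeholders for the learned grounding, segmentation, and pose-estimation modules described above, not the authors' actual API; the back-projection step is standard pinhole-camera geometry.

```python
import numpy as np

def backproject(depth_crop, K, box):
    """Back-project a cropped depth map to a 3D point cloud using intrinsics K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x1, y1, _, _ = box
    h, w = depth_crop.shape
    us, vs = np.meshgrid(np.arange(w) + x1, np.arange(h) + y1)
    z = depth_crop.reshape(-1)
    valid = z > 0
    x = (us.reshape(-1) - cx) * z / fx
    y = (vs.reshape(-1) - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def grasp_from_instruction(rgb, depth, instruction, K,
                           ground_target, segment_points, estimate_category_pose):
    """Hypothetical orchestration of the two-stage pipeline; the three learned
    modules are passed in as callables."""
    # Stage 1: ground the referred object in the RGB image from the free-form
    # instruction (names, attributes, spatial relations) -> 2D box (x1, y1, x2, y2).
    box = ground_target(rgb, instruction)

    # Stage 2: crop the depth map, lift it to a point cloud, segment the object
    # points, and estimate the category-level 6-DoF pose (R, t) and object size.
    x1, y1, x2, y2 = box
    points = backproject(depth[y1:y2, x1:x2], K, box)
    object_points = segment_points(points)
    R, t, size = estimate_category_pose(object_points)
    return R, t, size  # handed to a grasp planner on the physical robot
```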