We propose to leverage a real-world human activity RGB dataset to teach a robot {\em Task-Oriented Grasping} (TOG). On the one hand, RGB-D datasets that contain hands and objects in interaction often lack annotations because of the manual effort required to obtain them. On the other hand, RGB datasets are often annotated with labels that do not provide enough information to infer a 6D robotic grasp pose. However, they contain examples of grasps on a variety of objects for many different tasks, and thereby provide a much richer source of supervision than RGB-D datasets. We propose a model that takes an RGB image as input and outputs a hand pose and configuration as well as an object pose and shape. We follow the insight that jointly estimating hand and object poses increases accuracy compared to estimating these quantities independently. Quantitative experiments show that training an object pose predictor with hand pose information (and vice versa) outperforms training without this information. Given the trained model, we process an RGB dataset to automatically obtain training data for a TOG model. This model takes an object point cloud and a task as input and outputs a region of the object suitable for grasping, given the task. Qualitative experiments show that our model can successfully process a real-world dataset. Experiments with a robot demonstrate that this data enables the robot to learn task-oriented grasping of novel objects.
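The sketch below is a minimal illustration of the two-stage pipeline described above, not the paper's actual code: a joint hand-object estimator turns an RGB frame into supervision (hand pose and configuration, object pose and shape), and a TOG model maps an object point cloud plus a task label to a graspable region. All class names, method signatures, and dimensions (e.g. the 45-dimensional hand configuration and 1024-point object cloud) are assumptions made for illustration.

```python
import numpy as np

class HandObjectEstimator:
    """Hypothetical interface for the joint hand/object predictor."""
    def predict(self, rgb_image: np.ndarray):
        # Placeholder outputs with plausible shapes (assumptions, not real predictions).
        hand_pose = np.eye(4)               # 6D hand pose as a 4x4 transform
        hand_config = np.zeros(45)          # hand joint configuration (assumed size)
        object_pose = np.eye(4)             # 6D object pose
        object_shape = np.zeros((1024, 3))  # reconstructed object point cloud
        return hand_pose, hand_config, object_pose, object_shape

class TOGModel:
    """Hypothetical task-oriented grasping model interface."""
    def predict_region(self, object_points: np.ndarray, task: str) -> np.ndarray:
        # Return a per-point suitability score for grasping given the task.
        return np.random.rand(object_points.shape[0])

# Usage sketch: process one RGB frame into TOG supervision, then query the TOG model.
estimator, tog = HandObjectEstimator(), TOGModel()
frame = np.zeros((480, 640, 3), dtype=np.uint8)        # stand-in RGB image
_, _, obj_pose, obj_points = estimator.predict(frame)
scores = tog.predict_region(obj_points, task="handover")
grasp_region = obj_points[scores > 0.5]                 # candidate region for the task
```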