A robot working in human-centric environments needs to know which kinds of objects exist in the scene, where they are, and how to grasp and manipulate them in different situations to help humans with everyday tasks. Object recognition and grasping are therefore two key functionalities for such robots. Most state-of-the-art approaches tackle object recognition and grasping as two separate problems, even though both rely on visual input. Furthermore, the robot's knowledge is fixed after the training phase; if the robot then encounters new object categories, it must be retrained from scratch to incorporate the new information without catastrophic interference. To address this problem, we propose a deep learning architecture with augmented memory capacities that handles open-ended object recognition and grasping simultaneously. In particular, our approach takes multiple views of an object as input and jointly outputs a pixel-wise grasp configuration and a deep scale- and rotation-invariant representation. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories from very few examples on-site, in both simulation and real-world settings.