As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans do. In daily manipulation, the human grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to improve grasping stability. Utilizing grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, efficiently generates accurate, full-DoF, dense, and temporally smooth grasp poses, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable to the performance of human subjects under controlled conditions. Over 900 mean picks per hour (MPPH) is reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in water.