Grasping objects of different shapes and sizes - a foundational, effortless skill for humans - remains a challenging task in robotics. Although model-based approaches can predict stable grasp configurations for known object models, they struggle to generalize to novel objects and often operate in a non-interactive, open-loop manner. In this work, we present a reinforcement learning framework that learns the interactive grasping of various geometrically distinct real-world objects by continuously controlling an anthropomorphic robotic hand. We explore several explicit representations of object geometry as input to the policy. Moreover, we propose to inform the policy implicitly through signed distances and show that this is naturally suited to guide the search through a shaped reward component. Finally, we demonstrate that the proposed framework is able to learn even in more challenging conditions, such as targeted grasping from a cluttered bin. Necessary pre-grasping behaviors such as object reorientation and utilization of environmental constraints emerge in this case. Videos of learned interactive policies are available at https://maltemosbach.github.io/geometry_aware_grasping_policies.