The choice of grasp plays a critical role in the success of downstream manipulation tasks. Consider the task of placing an object in a cluttered scene: the majority of possible grasps may not be suitable for the desired placement. In this paper, we study the synergy between picking and placing an object in a cluttered scene in order to develop an algorithm for task-aware grasp estimation. We present an object-centric action space that encodes the relationship between the geometry of the placement scene and the object to be placed, providing placement affordance maps directly from perspective views of the placement scene. This action space enables the computation of a one-to-one mapping between placement and picking actions, allowing the robot to generate a diverse set of pick-and-place proposals and to optimize for a grasp under additional task constraints such as robot kinematics and collision avoidance. Through experiments both in simulation and on a real robot, we demonstrate that our method enables the robot to complete placement-aware grasping with over 89% accuracy while generalizing to novel objects and scenes.
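As a minimal sketch of how such a one-to-one mapping between placement and picking actions can arise, consider the following relation; the notation is ours and assumes a rigid, non-slipping grasp, and it is not necessarily the exact formulation used in the method:
$$
{}^{W}T_{G}^{\mathrm{place}} \;=\; {}^{W}T_{O}^{\mathrm{place}} \, {}^{O}T_{G},
$$
where ${}^{W}T_{O}^{\mathrm{place}}$ denotes the desired object pose in the placement scene (world frame $W$) and ${}^{O}T_{G}$ a candidate grasp expressed in the object frame $O$. Under the rigid-grasp assumption, each grasp ${}^{O}T_{G}$ determines a unique gripper pose ${}^{W}T_{G}^{\mathrm{place}}$ at placement, and vice versa, which is what allows picking and placing proposals to be paired one-to-one.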