Reorienting objects using extrinsic supporting items on a working platform is a meaningful yet challenging manipulation task, given the intricate geometry of the objects and the limits of the robot's feasible motions. In this work, we propose a pipeline that uses RGB-D perception results to predict the stable placements of objects afforded by supporting items; the pipeline comprises a generation stage, a refinement stage, and a classification stage. We then construct manipulation graphs that encode shared grasp configurations linking an object's stable placements, so the robot can reorient objects through sequential pick-and-place operations guided by these graphs. Experiments show that our approach is both effective and efficient. Simulation experiments demonstrate that the pipeline generalizes to novel objects placed in random start poses on the working platform, generating diverse placements with high accuracy. Moreover, the manipulation graphs facilitate collision-free motions for the robot when reorienting objects. We also conduct real-world experiments in which a robot performs sequential pick-and-place operations, showing that our method can transfer objects between placement poses in real scenes.
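To make the manipulation-graph idea concrete, the following is a minimal sketch, not the authors' implementation: nodes are stable placements, an edge connects two placements that share at least one feasible grasp, and a breadth-first search returns the shortest chain of pick-and-place operations from a start placement to a goal placement. The placement names, the `shared_grasps` table, and the grasp labels are hypothetical examples introduced purely for illustration.

```python
# Sketch of a manipulation graph over stable placements, assuming
# placements and shared grasps have already been predicted upstream.
from collections import deque

# Hypothetical stable placements of one object (illustrative names).
placements = ["P0_flat", "P1_leaning_on_block", "P2_upright", "P3_upside_down"]

# Two placements are connected if they share at least one feasible grasp,
# i.e. one pick-and-place can transform one placement into the other.
shared_grasps = {
    ("P0_flat", "P1_leaning_on_block"): ["grasp_side"],
    ("P1_leaning_on_block", "P2_upright"): ["grasp_top", "grasp_side"],
    ("P2_upright", "P3_upside_down"): ["grasp_top"],
}

# Build an undirected adjacency list: the manipulation graph.
graph = {p: set() for p in placements}
for a, b in shared_grasps:
    graph[a].add(b)
    graph[b].add(a)

def reorientation_sequence(start, goal):
    """Breadth-first search for the shortest chain of pick-and-place
    operations transforming the start placement into the goal placement."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal placement unreachable with the available grasps

if __name__ == "__main__":
    print(reorientation_sequence("P0_flat", "P3_upside_down"))
    # -> ['P0_flat', 'P1_leaning_on_block', 'P2_upright', 'P3_upside_down']
```

In the actual system, each edge would additionally carry the shared grasp configurations and be checked for collision-free robot motions before being used.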