Reorienting objects using extrinsic supporting items on the working platform is a meaningful yet challenging manipulation task, given the elaborate geometry of objects and the demands on the robot's motion planning. In this work, the robot learns to reorient objects through sequential pick-and-place operations, guided by sensing results from an RGBD camera. We propose generative models that predict, from observed point clouds, the stable placements an object affords on the supporting items. We then build manipulation graphs that connect an object's stable placements through shared grasp configurations, enabling pose transformation. Experiments show that our method is both effective and efficient. Simulation experiments demonstrate that the models generalize to previously unseen pairs of objects initialized with random poses on the table, and that the computed manipulation graphs yield collision-free motions for reorienting objects. In real-world experiments, a robot performs the sequential pick-and-place operations, indicating that our method can transfer objects' placement poses in real scenes.
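To make the manipulation-graph idea concrete, the sketch below shows one plausible reading of it: nodes are stable placements, an edge exists when two placements share at least one feasible grasp configuration (so a single pick-and-place transitions between them), and a reorientation plan is a path through the graph. This is a minimal illustration under assumed data structures, not the paper's implementation; the placement names, grasp ids, and helper functions are hypothetical.

```python
from collections import deque
from typing import Dict, List, Set, Tuple

def build_manipulation_graph(
    placements: List[str],
    feasible_grasps: Dict[str, Set[int]],  # placement -> ids of grasps feasible there
) -> Dict[Tuple[str, str], Set[int]]:
    """Connect two stable placements when they share a grasp configuration."""
    graph: Dict[Tuple[str, str], Set[int]] = {}
    for i, p in enumerate(placements):
        for q in placements[i + 1:]:
            shared = feasible_grasps[p] & feasible_grasps[q]
            if shared:  # a shared grasp allows one pick-and-place between p and q
                graph[(p, q)] = shared
                graph[(q, p)] = shared
    return graph

def plan_reorientation(
    graph: Dict[Tuple[str, str], Set[int]],
    start: str,
    goal: str,
) -> List[str]:
    """Breadth-first search for the shortest pick-and-place sequence."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for (u, v) in graph:
            if u == path[-1] and v not in visited:
                visited.add(v)
                frontier.append(path + [v])
    return []  # goal placement unreachable with the current grasp set

# Hypothetical example: the direct upright -> upside_down flip shares no
# grasp, so the planner routes through an intermediate "side" placement.
grasps = {"upright": {0, 1}, "side": {1, 2}, "upside_down": {2, 3}}
g = build_manipulation_graph(["upright", "side", "upside_down"], grasps)
print(plan_reorientation(g, "upright", "upside_down"))
# -> ['upright', 'side', 'upside_down']
```

In the paper's setting, the grasp feasibility sets would come from the learned models and collision checking rather than being given by hand; the graph search itself is the part this sketch is meant to clarify.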