This paper is concerned with perception challenges for robust grasping in the presence of clutter and unpredictable relative motion between robot and object. Traditional perception systems developed for static grasping are unable to provide feedback during the final phase of a grasp due to sensor minimum range, occlusion, and a limited field of view. A multi-camera eye-in-hand perception system is presented that has advantages over commonly used camera configurations. We quantitatively evaluate the system's performance on a real robot with an image-based visual servoing grasp controller and show a significantly improved success rate on a dynamic grasping task. A fully reproducible, open-source testing system is described to encourage benchmarking of dynamic grasping performance.
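For readers unfamiliar with image-based visual servoing, the standard formulation regulates an image-feature error directly, without explicit pose estimation: the camera twist is commanded as v = -λ L⁺ (s − s*), where s and s* are the current and desired feature vectors and L is the interaction matrix. The sketch below illustrates this classic textbook law, assuming NumPy; it is not the specific grasp controller evaluated in the paper, and the gain value is a hypothetical placeholder.

```python
import numpy as np

def ibvs_twist(s, s_star, L, gain=0.5):
    """Classic IBVS control law: v = -lambda * L^+ (s - s*).

    s, s_star : stacked current and desired image-feature vectors.
    L         : interaction matrix (image Jacobian) relating the 6-DoF
                camera twist to image-feature velocities.
    gain      : proportional gain lambda (hypothetical default).
    """
    error = s - s_star
    # The pseudo-inverse maps feature-space error to a camera twist;
    # the negative sign drives the error exponentially toward zero.
    return -gain * np.linalg.pinv(L) @ error
```

In an eye-in-hand grasp controller of this kind, s* would encode the image-feature configuration at the grasp pose, and the commanded twist is tracked by the arm until the gripper closes, so feedback remains available throughout the final approach.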