Human-robot object handovers have been an actively studied area of robotics over the past decade; however, very few techniques and systems have addressed the challenge of handing over diverse objects with arbitrary appearance, size, shape, and rigidity. In this paper, we present a vision-based system that enables reactive human-to-robot handovers of unknown objects. Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation to ensure reactivity and motion smoothness. Our system is robust to different object positions and orientations, and can grasp both rigid and non-rigid objects. We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects, a user study with naive users (N=6) handing over a subset of 15 objects, and a systematic evaluation examining different ways of handing objects. More results and videos can be found at https://sites.google.com/nvidia.com/handovers-of-arbitrary-objects.