We present a robot-to-human object handover algorithm and implement it on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs fully autonomous, robust object handovers to a human receiver in real time. Our algorithm relies on two complementary sensor modalities for feedback: joint torque sensors on the arm and an eye-in-hand RGB-D camera. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Measurements from the two sensor modalities are fed as input to two dedicated deep neural networks: the torque-sensor network classifies the human receiver's "intention" (pull, hold, or bump), while the vision network detects whether the receiver's fingers have wrapped around the object. The two networks' outputs are then fused, and the fused signal determines whether the robot releases the object. Despite substantive challenges in sensor-feedback synchronization and in object and human-hand detection, our system achieves robust robot-to-human handover with 98\% accuracy in preliminary real-world experiments with human receivers.
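To make the fusion step concrete, the following is a minimal sketch (not the authors' code) of the late-fusion release decision described above. It assumes the torque-sensor network outputs a probability distribution over the intention classes {pull, hold, bump} and the vision network outputs the probability that the receiver's fingers are wrapped around the object; the function name, the AND-style fusion rule, and the 0.5 thresholds are hypothetical.

\begin{verbatim}
import numpy as np

INTENTIONS = ("pull", "hold", "bump")

def fuse_and_decide(torque_probs: np.ndarray,
                    grasp_prob: float,
                    intent_thresh: float = 0.5,
                    grasp_thresh: float = 0.5) -> bool:
    """Return True if the robot should release the object.

    Hypothetical fusion rule: release only when the torque network
    sees a deliberate "pull" with sufficient confidence AND the
    vision network confirms the fingers are wrapped around the object.
    """
    intent = INTENTIONS[int(np.argmax(torque_probs))]
    pulling = intent == "pull" and torque_probs.max() >= intent_thresh
    grasped = grasp_prob >= grasp_thresh
    return pulling and grasped

# Example: confident pull + confirmed grasp -> release.
print(fuse_and_decide(np.array([0.8, 0.15, 0.05]), grasp_prob=0.9))  # True
# Example: bump detected -> hold on to the object.
print(fuse_and_decide(np.array([0.1, 0.2, 0.7]), grasp_prob=0.9))    # False
\end{verbatim}

A conjunctive rule of this kind matches the paper's stated goal of robustness: requiring agreement from both modalities before release guards against spurious torque events (e.g., an accidental bump) triggering a drop.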