Robotic applications require both correct task performance and compensation for undefined behaviors. Although deep learning is a promising approach for performing complex tasks, responding to undefined behaviors that are not reflected in the training dataset remains challenging. In a human-robot collaborative task, the robot may adopt an unexpected posture due to collisions or other unforeseen events. Robots should therefore be able to recover from such disturbances and complete the intended task. We propose a compensation method for undefined behaviors that switches between two controllers. Specifically, the proposed method switches between a learning-based and a model-based controller depending on the internal representation of a recurrent neural network that learns the task dynamics. We applied the proposed method to a pick-and-place task and evaluated its compensation for undefined behaviors. Experimental results in simulation and on a real robot demonstrate the effectiveness and high performance of the proposed method.
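To illustrate the switching idea described above, the following is a minimal Python sketch, not the implementation used in the paper. It assumes a hypothetical `rnn_policy` that returns an action together with a prediction of the next observation, a hypothetical model-based `recovery_controller`, and a simple prediction-error threshold as a stand-in for the internal-representation criterion of the recurrent network.

```python
import numpy as np


class ControllerSwitcher:
    """Minimal sketch of switching between a learning-based and a
    model-based controller.

    Assumption: the learned RNN policy outputs, at every step, an action
    and a prediction of the next observation. When the measured
    observation deviates too far from the previous prediction, the
    situation is treated as undefined behavior and control is handed to
    a model-based recovery controller.
    """

    def __init__(self, rnn_policy, recovery_controller, threshold=0.1):
        self.rnn_policy = rnn_policy        # hypothetical learned (RNN) policy
        self.recovery = recovery_controller  # hypothetical model-based controller
        self.threshold = threshold           # assumed anomaly threshold
        self._predicted_obs = None           # RNN's prediction from the last step

    def step(self, observation):
        """Return the action for the current observation."""
        # Anomaly score: distance between the new measurement and the
        # RNN's previous prediction (zero on the first step).
        if self._predicted_obs is not None:
            anomaly = float(np.linalg.norm(observation - self._predicted_obs))
        else:
            anomaly = 0.0

        # Query the learned policy: it proposes an action and predicts
        # the observation it expects at the next step.
        action, self._predicted_obs = self.rnn_policy.step(observation)

        if anomaly > self.threshold:
            # Undefined behavior suspected: let the model-based controller
            # drive the robot back toward a recoverable posture.
            action = self.recovery.recover(observation)
        return action
```

In this sketch the prediction error is only a convenient proxy; the paper's criterion is based on the RNN's internal representation, and the threshold, interfaces, and recovery strategy are illustrative assumptions.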