Methods for teaching motion skills to robots typically focus on training a single skill at a time. Robots capable of learning from demonstration can considerably benefit from the added ability to learn new movement skills without forgetting what was learned in the past. To this end, we propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers. We empirically demonstrate the effectiveness of this approach in remembering long sequences of trajectory learning tasks without the need to store any data from past demonstrations. Our results show that hypernetworks outperform other state-of-the-art continual learning approaches for learning from demonstration. In our experiments, we use the popular LASA benchmark as well as two new datasets of kinesthetic demonstrations collected with a real robot, which we introduce in this paper: the HelloWorld and RoboTasks datasets. We evaluate our approach on a physical robot and demonstrate its effectiveness in learning real-world robotic tasks involving changing positions as well as orientations. We report both trajectory error metrics and continual learning metrics, and we propose two new continual learning metrics. Our code, along with the newly collected datasets, is available at https://github.com/sayantanauddy/clfd.
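To make the hypernetwork idea concrete, the following is a minimal, illustrative PyTorch sketch of a hypernetwork that maps a per-task embedding to the parameters of a small target network. All names and sizes here (`HyperNetwork`, `TARGET_SHAPES`, `emb_dim`, the 2-64-2 target MLP) are assumptions for this sketch and do not reproduce the architecture or training procedure used in the paper or in the linked repository.

```python
import torch
import torch.nn as nn

# Illustrative only: shapes of a tiny 2-64-2 target MLP (weights and biases).
TARGET_SHAPES = [(64, 2), (64,), (2, 64), (2,)]
n_target_params = sum(torch.Size(s).numel() for s in TARGET_SHAPES)

class HyperNetwork(nn.Module):
    """Maps a learned task embedding to the flattened target-network parameters."""
    def __init__(self, n_tasks, emb_dim=8, hidden=128):
        super().__init__()
        # One trainable embedding per task; a new embedding is added for each new
        # demonstration task, so no data from past demonstrations needs to be stored.
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_target_params),
        )

    def forward(self, task_id):
        flat = self.net(self.task_emb(torch.tensor([task_id])))
        # Split the flat output into tensors matching the target network's shapes.
        params, offset = [], 0
        for shape in TARGET_SHAPES:
            n = torch.Size(shape).numel()
            params.append(flat[0, offset:offset + n].view(shape))
            offset += n
        return params

def target_forward(x, params):
    # Apply the generated parameters as a functional 2-layer MLP.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1.t() + b1) @ w2.t() + b2

# Usage: generate parameters for task 0 and evaluate a batch of 2-D inputs
# (e.g. states fed to a learned vector field in a trajectory model).
hnet = HyperNetwork(n_tasks=3)
y = target_forward(torch.randn(5, 2), hnet(0))
print(y.shape)  # torch.Size([5, 2])
```

In this setup only the hypernetwork and the task embeddings carry trainable parameters; the target network is purely functional, which is what allows a single hypernetwork to represent many trajectory learning tasks without revisiting earlier demonstrations.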