Curriculum-Based Imitation of Versatile Skills

Learning skills by imitation is a promising concept for the intuitive teaching of robots. A common way to learn such skills is to fit a parametric model by maximizing the likelihood of the demonstrations. Yet, human demonstrations are often multi-modal, i.e., the same task is solved in multiple ways, which poses a major challenge for most imitation learning methods based on such a maximum likelihood (ML) objective. The ML objective forces the model to cover all data: it prevents specialization in the context space and can cause mode-averaging in the behavior space, leading to suboptimal or potentially catastrophic behavior. Here, we alleviate these issues by introducing a curriculum that assigns a weight to each data point, allowing the model to specialize on data it can represent while an entropy bonus incentivizes it to cover as much data as possible. We extend our algorithm to a Mixture of (linear) Experts (MoE) such that the individual components can specialize on local context regions while the MoE covers all data points. We evaluate our approach on complex simulated and real robot control tasks and show that it learns from versatile human demonstrations and significantly outperforms current state-of-the-art (SOTA) methods. A reference implementation can be found at https://github.com/intuitive-robots/ml-cur
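As a rough, illustrative sketch of the idea (not the paper's exact formulation; the symbols $w_i$, $\eta$, $p_\theta$, $c_i$, $a_i$ and the simplex constraint are assumptions), the curriculum can be viewed as a weighted ML objective with an entropy bonus on the per-data-point weights:

$$
\max_{\theta,\, w} \;\; \sum_{i=1}^{N} w_i \log p_\theta(a_i \mid c_i) \;+\; \eta\, \mathcal{H}(w)
\qquad \text{s.t.} \quad \sum_{i=1}^{N} w_i = 1,\; w_i \ge 0,
$$

where $c_i$ and $a_i$ denote the context and action of the $i$-th demonstration and $w_i$ is its curriculum weight. For a fixed model $p_\theta$, the optimal weights under this objective are $w_i \propto \exp\!\big(\log p_\theta(a_i \mid c_i)/\eta\big)$, so they concentrate on data the model already represents well, while a larger entropy coefficient $\eta$ pushes the weights toward the uniform distribution, i.e., toward covering all data points.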