A long-cherished vision in robotics is to equip robots with skills that match the versatility and precision of humans. For example, when playing table tennis, a robot should be capable of returning the ball in various ways while precisely placing it at the desired location. A common approach to modeling such versatile behavior is to use a Mixture of Experts (MoE) model, where each expert is a contextual motion primitive. However, learning such MoEs is challenging, as most objectives force the model to cover the entire context space, which prevents specialization of the primitives and results in rather low-quality components. Starting from maximum entropy reinforcement learning (RL), we decompose the objective into optimizing an individual lower bound per mixture component. Further, we introduce a curriculum by allowing the components to focus on a local context region, enabling the model to learn highly accurate skill representations. To this end, we use local context distributions that are adapted jointly with the expert primitives. Our lower bound advocates an iterative addition of new components, where each new component concentrates on a local context region not yet covered by the current MoE. This local and incremental learning yields a modular MoE model of high accuracy and versatility, where both properties can be scaled by adding more components on the fly. We demonstrate this through an extensive ablation study and on two challenging simulated robot skill learning tasks. We compare our achieved performance to LaDiPS and HiREPS, known hierarchical policy search methods for learning diverse skills.
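To make the core idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of a mixture of experts where each expert is a simple linear-Gaussian primitive paired with a local Gaussian context distribution. Gating is done by the local context densities, so each component specializes on its own context region, and new components can be added on the fly. All class and parameter names here are illustrative assumptions.

```python
import math
import random


class LocalComponent:
    """One expert primitive with its own local context region.

    The expert maps context c to action a = w0 + w1 * c (plus Gaussian
    noise); the local context region is modeled as N(mu, sigma^2).
    """

    def __init__(self, mu, sigma, w0, w1, noise=0.05):
        self.mu, self.sigma = mu, sigma
        self.w0, self.w1, self.noise = w0, w1, noise

    def context_density(self, c):
        # Gaussian density of the component's local context distribution.
        z = (c - self.mu) / self.sigma
        return math.exp(-0.5 * z * z) / (self.sigma * math.sqrt(2.0 * math.pi))

    def sample_action(self, c):
        # Linear-Gaussian primitive conditioned on the context.
        return self.w0 + self.w1 * c + random.gauss(0.0, self.noise)


class LocalMoE:
    """Mixture of local experts gated by their context densities."""

    def __init__(self):
        self.components = []

    def add_component(self, comp):
        # Components are added incrementally; each one focuses on a
        # context region not yet covered by the current mixture.
        self.components.append(comp)

    def sample(self, c):
        # Responsibilities are proportional to the local context densities,
        # so experts are only selected near the region they specialize in.
        dens = [comp.context_density(c) for comp in self.components]
        total = sum(dens)
        probs = [d / total for d in dens]
        chosen = random.choices(self.components, weights=probs)[0]
        return chosen.sample_action(c)


moe = LocalMoE()
moe.add_component(LocalComponent(mu=-1.0, sigma=0.5, w0=0.0, w1=1.0))
moe.add_component(LocalComponent(mu=1.0, sigma=0.5, w0=2.0, w1=-1.0))
action = moe.sample(0.9)  # near c=1.0, the second expert dominates the gating
```

In the paper's method the expert parameters and the local context distributions are adapted jointly under per-component lower bounds; this sketch only illustrates the structural idea of local gating and incremental component addition.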