Modular robots can be reconfigured to create a variety of designs from a small set of components. But constructing a robot's hardware alone is not enough; each robot also needs a controller. One could create controllers for some designs individually, but developing policies for each additional design is time-consuming. This work presents a method that uses demonstrations from one set of designs to accelerate policy learning for additional designs. We leverage a learning framework in which a graph neural network is composed of modular components: each component corresponds to a type of module (e.g., a leg, wheel, or body), and these components can be recombined to learn from multiple designs at once. In this paper we develop a combined reinforcement and imitation learning algorithm. Our method is novel in that the policy is optimized, within a single objective function, both to maximize a reward on one design and to imitate demonstrations from different designs. We show that when the modular policy is optimized with this combined objective, demonstrations from one set of designs influence how the policy behaves on a different design, decreasing the number of training iterations needed.
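To make the combined objective concrete, one plausible form is sketched below; the notation is ours, not taken from the paper: pi_theta is the shared modular policy, d is the design trained with reinforcement learning, D_{d'} is a demonstration set from a different design, and lambda is a mixing weight. The objective is a weighted sum of a reward-maximization term and a behavioral-cloning term:

    L(theta) = -E_{tau ~ pi_theta, d}[ R(tau) ] + lambda * E_{(s,a) ~ D_{d'}}[ -log pi_theta(a | s) ]

Under this sketch, minimizing L(theta) simultaneously increases the expected reward R on design d and the likelihood of the demonstrated state-action pairs from design d', so gradient updates from both terms flow through the same shared module components.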