Imitation Learning (IL) methods seek to match the behavior of an agent with that of an expert. In the present work, we propose a new IL method based on a conceptually simple algorithm: Primal Wasserstein Imitation Learning (PWIL), which builds on the primal form of the Wasserstein distance between the expert and the agent state-action distributions. We present a reward function that is derived offline, as opposed to recent adversarial IL algorithms that learn a reward function through interactions with the environment, and that requires little fine-tuning. We show that we can recover expert behavior on a variety of continuous control tasks of the MuJoCo domain in a sample-efficient manner, both in terms of agent interactions and of expert interactions with the environment. Finally, we show that the behavior of the agent we train matches the behavior of the expert as measured by the Wasserstein distance, rather than by the commonly used proxy of performance.
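For reference, the primal (Kantorovich) form of the Wasserstein distance mentioned above is the standard optimal-coupling formulation; the notation below is ours and the particular ground cost and exponent used by PWIL may differ. With $\hat{\rho}_\pi$ and $\hat{\rho}_e$ denoting the agent and expert state-action distributions, $\Theta(\hat{\rho}_\pi, \hat{\rho}_e)$ the set of couplings (joint distributions with these marginals), and $d$ a ground metric on state-action pairs,
\[
W_p(\hat{\rho}_\pi, \hat{\rho}_e) \;=\; \left( \inf_{\theta \in \Theta(\hat{\rho}_\pi, \hat{\rho}_e)} \int d\big((s,a),(s',a')\big)^p \, \mathrm{d}\theta\big((s,a),(s',a')\big) \right)^{1/p}.
\]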