We tackle a common scenario in imitation learning (IL), where an agent tries to recover the optimal policy from expert demonstrations without further access to the expert or to environment reward signals. Apart from simple Behavior Cloning (BC), which adopts supervised learning and suffers from compounding errors, previous solutions such as inverse reinforcement learning (IRL) and recent generative adversarial methods involve bi-level or alternating optimization to update the reward function and the policy, and thus suffer from high computational cost and training instability. Inspired by recent progress in energy-based models (EBMs), in this paper we propose a simplified IL framework named Energy-Based Imitation Learning (EBIL). Instead of updating the reward and the policy iteratively, EBIL breaks out of the traditional IRL paradigm with a simple and flexible two-stage solution: first, it estimates the expert energy as a surrogate reward function through score matching; then, it uses this reward to learn the policy with standard reinforcement learning algorithms. EBIL combines the ideas of EBMs and occupancy measure matching, and our theoretical analysis reveals that EBIL and Maximum Entropy IRL (MaxEnt IRL) are two sides of the same coin, so EBIL can serve as an alternative to adversarial IRL methods. Extensive qualitative and quantitative experiments indicate that EBIL recovers meaningful and interpretable reward signals while achieving performance comparable to existing algorithms on IL benchmarks.
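To make the two-stage recipe concrete, the following is a minimal PyTorch sketch of the pipeline described above: stage one fits a scalar energy model on expert state-action pairs via (denoising) score matching, and stage two exposes the frozen negative energy as a surrogate reward for any off-the-shelf RL algorithm. The network architecture, the Gaussian denoising score matching objective, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

    # Minimal two-stage sketch of the EBIL recipe (assumptions: PyTorch,
    # Gaussian denoising score matching, arbitrary hyperparameters).
    import torch
    import torch.nn as nn

    class EnergyNet(nn.Module):
        """Scalar energy E_theta(x) over a concatenated state-action vector x."""
        def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x).squeeze(-1)

    def dsm_loss(energy: EnergyNet, x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
        """Denoising score matching: match the model score -grad_x E(x_noisy)
        to the score of the Gaussian-perturbed data, -(x_noisy - x) / sigma^2."""
        noise = torch.randn_like(x) * sigma
        x_noisy = (x + noise).requires_grad_(True)
        e = energy(x_noisy)
        model_score = -torch.autograd.grad(e.sum(), x_noisy, create_graph=True)[0]
        target_score = -noise / sigma ** 2
        return ((model_score - target_score) ** 2).sum(dim=-1).mean()

    def fit_expert_energy(energy, expert_obs, expert_act, steps=1000, lr=1e-4):
        """Stage 1: estimate the expert energy from demonstration (s, a) pairs."""
        x = torch.cat([expert_obs, expert_act], dim=-1)
        opt = torch.optim.Adam(energy.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            dsm_loss(energy, x).backward()
            opt.step()
        return energy

    def surrogate_reward(energy, obs, act):
        """Stage 2: the frozen negative energy acts as the reward signal for any
        off-the-shelf RL algorithm (e.g. PPO or SAC) that then trains the policy."""
        with torch.no_grad():
            return -energy(torch.cat([obs, act], dim=-1))

In this sketch the surrogate reward is simply the negative estimated energy; how the energy is scaled or shaped before being handed to the RL algorithm is a design choice left open here.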