We provide new perspectives and inference algorithms for Maximum Entropy (MaxEnt) Inverse Reinforcement Learning (IRL), which offers a principled way to select, among the many reward functions consistent with given expert demonstrations, a most non-committal one. We first present a generalized MaxEnt formulation based on minimizing a KL-divergence instead of maximizing an entropy. This improves on the previous heuristic derivation of the MaxEnt IRL model (for stochastic MDPs), allows a unified view of MaxEnt IRL and Relative Entropy IRL, and leads to a model-free learning algorithm for the MaxEnt IRL model. Second, a careful review of existing inference algorithms and implementations shows that they compute the marginals required for learning the model only approximately. We provide examples to illustrate this and present an efficient, exact inference algorithm. Our algorithm handles variable-length demonstrations; moreover, while a basic version takes time quadratic in the maximum demonstration length L, an improved version reduces this to linear time using a padding trick. Experiments show that our exact algorithm improves reward learning compared to the approximate ones. Furthermore, our algorithm scales to a large, real-world dataset involving driver behaviour forecasting. We provide an optimized implementation compatible with the OpenAI Gym interface. Our new insights and algorithms could renew interest in, and further exploration of, the original MaxEnt IRL model.
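To make the contrast concrete, the following is a minimal sketch, in our own notation rather than necessarily the paper's, of how an entropy-maximizing objective relates to a KL-divergence-minimizing one. Here P is a distribution over trajectories tau, f(tau) are trajectory features, f-tilde are the empirical feature expectations computed from the expert demonstrations, and Q is an assumed reference trajectory distribution.

% Sketch only: standard MaxEnt IRL objective vs. a KL-based generalization.
% Notation (P, Q, tau, f, f-tilde) is ours and may differ from the paper's.
\begin{align*}
\text{MaxEnt IRL:}\qquad
  & \max_{P}\; -\sum_{\tau} P(\tau)\log P(\tau)
  && \text{s.t. } \mathbb{E}_{P}[f(\tau)] = \tilde{f},\quad \sum_{\tau} P(\tau) = 1,\\
\text{KL-based generalization:}\qquad
  & \min_{P}\; \sum_{\tau} P(\tau)\log\frac{P(\tau)}{Q(\tau)}
  && \text{s.t. } \mathbb{E}_{P}[f(\tau)] = \tilde{f},\quad \sum_{\tau} P(\tau) = 1.
\end{align*}

When Q is uniform over trajectories, minimizing the KL-divergence reduces to maximizing entropy; other choices of Q, such as the trajectory distribution induced by the MDP dynamics under a baseline policy, yield Relative-Entropy-IRL-style objectives, consistent with the unified view described above.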