We provide a unifying view of a large family of previous imitation learning algorithms through the lens of moment matching. At its core, our classification scheme is based on whether the learner attempts to match (1) reward or (2) action-value moments of the expert's behavior, with each option leading to differing algorithmic approaches. By considering adversarially chosen divergences between learner and expert behavior, we are able to derive bounds on policy performance that apply to all algorithms in each of these classes, the first such bounds to our knowledge. We also introduce the notion of recoverability, implicit in many previous analyses of imitation learning, which allows us to cleanly delineate how well each algorithmic family is able to mitigate compounding errors. We derive two novel algorithm templates, AdVIL and AdRIL, with strong guarantees, simple implementation, and competitive empirical performance.
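To make the moment-matching viewpoint concrete, the reward-moment case can be sketched as an adversarial objective. The display below is an illustrative sketch under standard finite-horizon notation, with $J$ denoting expected return over horizon $H$, $d_\pi$ the average state-action visitation distribution of policy $\pi$, and $\mathcal{F}_r$ the class of reward moments; these symbols are chosen for illustration and need not match the paper's exact notation. Provided the true reward lies in $\mathcal{F}_r$,
\[
J(\pi_E) - J(\pi) \;\le\; H \sup_{f \in \mathcal{F}_r} \Big( \mathbb{E}_{(s,a) \sim d_{\pi_E}}\big[f(s,a)\big] \;-\; \mathbb{E}_{(s,a) \sim d_{\pi}}\big[f(s,a)\big] \Big),
\]
so driving this adversarially chosen divergence below $\epsilon$ bounds the imitation gap by $\epsilon H$; the action-value (Q-moment) classes admit analogous bounds whose tightness depends on recoverability.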