In imitation learning, it is common to learn a behavior policy that matches an unknown target policy via maximum-likelihood training on a collected set of target demonstrations. In this work, we consider using offline experience datasets, potentially far from the target distribution, to learn low-dimensional state representations that provably improve the sample efficiency of downstream imitation learning. A central challenge in this setting is that the unknown target policy itself may not exhibit low-dimensional behavior, so the representation learning objective risks aliasing states in which the target policy acts differently. Circumventing this challenge, we derive a representation learning objective that provides an upper bound on the performance difference between the target policy and a low-dimensional policy trained via maximum likelihood, and this bound is tight regardless of whether the target policy itself exhibits low-dimensional structure. Turning to the practicality of our method, we show that our objective can be implemented as contrastive learning, in which the transition dynamics are approximated by either an implicit energy-based model or, in some special cases, an implicit linear model with representations given by random Fourier features. Experiments on both tabular environments and high-dimensional Atari games provide quantitative evidence for the practical benefits of our proposed objective.
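As background for the linear-model special case mentioned above, the sketch below illustrates how random Fourier features turn an RBF kernel into an explicit inner product, so a kernel model can be fit as a linear one. This is a generic illustration, not the paper's construction: the input dimension, feature count, and unit kernel bandwidth are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(x, W, b):
    """Map inputs to features phi(x) such that
    phi(x) . phi(y) ~= exp(-||x - y||^2 / 2), the unit-bandwidth RBF kernel."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

d, D = 5, 4096                          # input dim, feature dim (illustrative)
W = rng.standard_normal((d, D))         # frequencies ~ N(0, I) for unit bandwidth
b = rng.uniform(0.0, 2.0 * np.pi, D)    # random phases

x = rng.standard_normal(d)
y = rng.standard_normal(d)

approx = random_fourier_features(x, W, b) @ random_fourier_features(y, W, b)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
```

The approximation error shrinks at rate O(1/sqrt(D)), so a few thousand features suffice for a close match, and any kernel regression over states reduces to ordinary linear regression on `phi`.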