Online imitation learning is the problem of how best to mimic expert demonstrations, given access to the environment or an accurate simulator. Prior work has shown that in the infinite sample regime, exact moment matching achieves value equivalence to the expert policy. However, in the finite sample regime, even if one has no optimization error, empirical variance can lead to a performance gap that scales with $H^2 / N$ for behavioral cloning and $H / \sqrt{N}$ for online moment matching, where $H$ is the horizon and $N$ is the size of the expert dataset. We introduce the technique of replay estimation to reduce this empirical variance: by repeatedly executing cached expert actions in a stochastic simulator, we compute a smoother expert visitation distribution estimate to match. In the presence of general function approximation, we prove a meta theorem reducing the performance gap of our approach to the parameter estimation error for offline classification (i.e. learning the expert policy). In the tabular setting or with linear function approximation, our meta theorem shows that the performance gap incurred by our approach achieves the optimal $\widetilde{O}\left( \min\left( H^{3/2} / N,\, H / \sqrt{N} \right) \right)$ dependency, under significantly weaker assumptions compared to prior work. We implement multiple instantiations of our approach on several continuous control tasks and find that we are able to significantly improve policy performance across a variety of dataset sizes.
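As a rough illustration of the replay idea, the following is a minimal sketch for a tabular setting, assuming a hypothetical stochastic simulator interface (`reset()` returning a hashable state and `step(action)` returning `(next_state, done)`). It re-executes cached expert action sequences to accumulate a smoothed estimate of the expert's per-timestep state visitation distribution; it is illustrative only and omits the details of the paper's actual estimator and the downstream moment-matching step.

```python
import collections

def replay_estimate(expert_trajectories, simulator, num_replays=10):
    """Illustrative sketch of replay estimation (hypothetical interface).

    expert_trajectories: list of trajectories, each a list of (state, action) pairs.
    simulator: assumed to expose reset() -> state and step(action) -> (next_state, done).
    """
    visit_counts = collections.Counter()
    total = 0
    for traj in expert_trajectories:
        actions = [a for (_, a) in traj]          # cached expert actions
        for _ in range(num_replays):
            state = simulator.reset()
            for h, action in enumerate(actions):
                visit_counts[(h, state)] += 1     # record visitation at timestep h
                total += 1
                state, done = simulator.step(action)  # stochastic transition
                if done:
                    break
    # Normalize counts into an estimated (timestep, state) visitation
    # distribution for the learner to match (e.g., via online moment matching).
    return {key: count / total for key, count in visit_counts.items()}
```

Because the simulator's transitions are stochastic, each replay of the same cached action sequence yields a different state sequence, so averaging over replays reduces the variance of the visitation estimate relative to relying on the $N$ expert trajectories alone.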