Owing to recent empirical successes, the options framework for hierarchical reinforcement learning is gaining increasing popularity. Rather than learning from rewards, which suffers from the curse of dimensionality, we consider learning an options-type hierarchical policy from expert demonstrations. Such a problem is referred to as hierarchical imitation learning. Converting this problem to parameter inference in a latent variable model, we theoretically characterize the EM approach proposed by Daniel et al. (2016). The population-level algorithm is analyzed as an intermediate step, which is nontrivial because the samples are correlated. If the expert policy can be parameterized by a variant of the options framework, then under regularity conditions we prove that the proposed algorithm converges with high probability to a norm ball around the true parameter. To our knowledge, this is the first performance guarantee for a hierarchical imitation learning algorithm that observes only primitive state-action pairs.