In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator. Recent methods based on adversarial imitation learning have achieved state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to their reliance on data-inefficient, model-free reinforcement learning algorithms. This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk. In this work, we hypothesize that ideas from model-based reinforcement learning can be combined with adversarial methods for IfO to increase their data efficiency without sacrificing performance. Specifically, we consider time-varying linear Gaussian policies and propose a method that integrates the linear-quadratic regulator with path integral policy improvement into an existing adversarial IfO framework. The result is a more data-efficient IfO algorithm with better performance, which we show empirically in four simulation domains: using far fewer interactions with the environment, the proposed method exhibits performance similar to or better than that of the existing technique.
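As a point of reference for the policy class named above, the following is a minimal sketch of a time-varying linear Gaussian policy, where the action at each timestep is drawn from a Gaussian whose mean is an affine function of the state. The class name and the parameters K, k, and Sigma are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class TimeVaryingLinearGaussianPolicy:
    """u_t ~ N(K_t x_t + k_t, Sigma_t) for t = 0, ..., T-1 (hypothetical sketch)."""

    def __init__(self, K, k, Sigma):
        # K: (T, action_dim, state_dim) time-indexed feedback gains
        # k: (T, action_dim) time-indexed feedforward terms
        # Sigma: (T, action_dim, action_dim) time-indexed exploration covariances
        self.K, self.k, self.Sigma = K, k, Sigma

    def act(self, x, t, rng=None):
        # Sample an action for state x at timestep t.
        rng = rng or np.random.default_rng()
        mean = self.K[t] @ x + self.k[t]
        return rng.multivariate_normal(mean, self.Sigma[t])
```

In trajectory-centric methods of this kind, the per-timestep gains and covariances are typically refit from rollouts, which is what makes the policy class amenable to LQR-style updates.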