Imitation learning (IL) is a frequently used approach for data-efficient policy learning. Many IL methods, such as Dataset Aggregation (DAgger), combat challenges like distributional shift by interacting with oracular experts. Unfortunately, assuming access to oracular experts is often unrealistic in practice; data used in IL frequently comes from offline processes such as lead-through or teleoperation. In this paper, we present a novel imitation learning technique called Collocation for Demonstration Encoding (CoDE) that operates using only a fixed set of trajectory demonstrations. We circumvent the challenges of methods such as back-propagation-through-time by introducing an auxiliary trajectory network, taking inspiration from collocation techniques in optimal control. Our method generalizes well and reproduces the demonstrated behavior more accurately, with fewer guiding trajectories, than standard behavioral cloning methods. We present simulation results on a 7-degree-of-freedom (DoF) robotic manipulator that learns to exhibit lifting, target-reaching, and obstacle-avoidance behaviors.