Learning from Demonstration (LfD) approaches empower end-users to teach robots novel tasks via demonstrations of the desired behaviors, democratizing access to robotics. However, current LfD frameworks are not capable of fast adaptation to heterogeneous human demonstrations or of large-scale deployment in ubiquitous robotics applications. In this paper, we propose a novel LfD framework, Fast Lifelong Adaptive Inverse Reinforcement Learning (FLAIR). Our approach (1) leverages learned strategies to construct policy mixtures for fast adaptation to new demonstrations, allowing for quick end-user personalization; (2) distills common knowledge across demonstrations, achieving accurate task inference; and (3) expands its model only when needed in lifelong deployments, maintaining a concise set of prototypical strategies that can approximate all behaviors via policy mixtures. We empirically validate that FLAIR achieves adaptability (i.e., the robot adapts to heterogeneous, user-specific task preferences), efficiency (i.e., the robot achieves sample-efficient adaptation), and scalability (i.e., the model grows sublinearly with the number of demonstrations while maintaining high performance). FLAIR surpasses benchmarks across three control tasks with an average 57% improvement in policy returns and an average 78% fewer episodes required for demonstration modeling using policy mixtures. Finally, we demonstrate the success of FLAIR in a table tennis task and find that users rate FLAIR as having higher task (p < .05) and personalization (p < .05) performance.
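To make the policy-mixture idea concrete, the sketch below shows one minimal way a new behavior could be approximated as a weighted combination of prototypical strategy policies. This is an illustrative assumption, not FLAIR's actual implementation: the class name `PolicyMixture`, the toy strategies, and the hand-set weights are hypothetical, and averaging actions is just one simple way to realize a mixture. In the full framework, the mixture coefficients would be inferred from a new demonstration rather than fixed by hand.

```python
import numpy as np

class PolicyMixture:
    """Approximate a demonstrator's behavior as a weighted combination
    of previously learned prototypical strategy policies (a sketch)."""

    def __init__(self, strategy_policies, weights):
        # strategy_policies: list of callables mapping state -> action
        # weights: one mixture coefficient per strategy, normalized to sum to 1
        assert len(strategy_policies) == len(weights)
        self.policies = strategy_policies
        self.weights = np.asarray(weights, dtype=float) / np.sum(weights)

    def act(self, state):
        # Blend the prototypical policies' actions; modeling a new
        # demonstration then only requires fitting the weights,
        # not training a new policy from scratch.
        actions = np.stack([pi(state) for pi in self.policies])
        return self.weights @ actions


# Usage: two toy strategies over a 2-D action space, mixed 70/30.
aggressive = lambda s: np.array([1.0, 0.0])
defensive = lambda s: np.array([0.0, 1.0])
mixture = PolicyMixture([aggressive, defensive], weights=[0.7, 0.3])
print(mixture.act(state=np.zeros(4)))  # -> [0.7 0.3]
```

Because only the low-dimensional weight vector is adapted per user, this construction is consistent with the abstract's claims of sample-efficient personalization and a model that grows sublinearly with the number of demonstrations.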