Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor? This paper presents ThriftyDAgger, an algorithm for actively querying a human supervisor given a desired budget of human interventions. ThriftyDAgger uses a learned switching policy to solicit interventions only at states that are sufficiently (1) novel, where the robot policy has no reference behavior to imitate, or (2) risky, where the robot has low confidence in task completion. To detect the latter, we introduce a novel metric for estimating risk under the current robot policy. Experiments in simulation and on a physical cable routing task suggest that ThriftyDAgger's intervention criteria balance task performance and supervisor burden more effectively than prior algorithms. ThriftyDAgger can also be applied at execution time, where it achieves a 100% success rate on both the simulation and physical tasks. A user study (N=10) in which users control a three-robot fleet while also performing a concentration task suggests that ThriftyDAgger increases human and robot performance by 58% and 80% respectively compared to the next best algorithm, while reducing supervisor burden.
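As a rough illustration of the switching rule described above (a minimal sketch, not ThriftyDAgger's exact formulation), the query decision could combine a novelty check and a risk check. Here ensemble disagreement stands in for the novelty signal, a learned success estimator `q_success` stands in for the risk metric, and both thresholds are assumptions introduced purely for illustration.

```python
import numpy as np

def should_request_intervention(state, policy_ensemble, q_success,
                                tau_novelty, tau_risk):
    """Return True if the human supervisor should be queried at `state`.

    policy_ensemble: list of policies mapping state -> action (np.ndarray)
    q_success:       callable (state, action) -> estimated probability in
                     [0, 1] of task completion under the current robot policy

    All names and thresholds here are illustrative assumptions, not the
    paper's actual interface.
    """
    actions = np.stack([pi(state) for pi in policy_ensemble])

    # (1) Novelty: high disagreement among ensemble members suggests the
    # robot policy has no reliable reference behavior at this state.
    novelty = actions.std(axis=0).mean()

    # (2) Risk: a low estimated probability of task completion from the
    # policy's mean action suggests the robot is unlikely to succeed alone.
    risk = 1.0 - q_success(state, actions.mean(axis=0))

    return (novelty > tau_novelty) or (risk > tau_risk)
```

One way to respect the intervention budget mentioned in the abstract would be to tune `tau_novelty` and `tau_risk` so that the expected query rate under this rule matches the desired budget of human interventions.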