The ability to discover behaviours from past experience and transfer them to new tasks is a hallmark of intelligent agents acting sample-efficiently in the real world. Equipping embodied reinforcement learners with the same ability may be crucial for their successful deployment in robotics. While hierarchical and KL-regularized RL individually hold promise here, arguably a hybrid approach could combine their respective benefits. Key to both fields is the use of information asymmetry to bias which skills are learnt. While the choice of asymmetry strongly influences transferability, prior works have explored only a narrow range of asymmetries, motivated primarily by intuition. In this paper, we theoretically and empirically demonstrate the crucial trade-off, controlled by information asymmetry, between the expressivity and transferability of skills across sequential tasks. Given this insight, we provide a principled approach to choosing asymmetry and apply it to a complex, robotic block-stacking domain, unsolvable by baselines, demonstrating the effectiveness of hierarchical KL-regularized RL, coupled with the correct choice of asymmetry, for sample-efficient transfer learning.