The predictive information, the mutual information between the past and the future, has been shown to be a useful auxiliary loss for representation learning when training reinforcement learning agents, as the ability to model what will happen next is critical to success on many control tasks. While existing studies are largely restricted to training specialist agents on single-task settings in simulation, in this work we study modeling the predictive information for robotic agents and its importance for general-purpose agents trained to master a large repertoire of diverse skills from large amounts of data. Specifically, we introduce Predictive Information QT-Opt (PI-QT-Opt), a QT-Opt agent augmented with an auxiliary loss that learns representations of the predictive information, which solves up to 297 vision-based robot manipulation tasks in simulation and the real world with a single set of parameters. We demonstrate that modeling the predictive information significantly improves success rates on the training tasks and leads to better zero-shot transfer to unseen novel tasks. Finally, we evaluate PI-QT-Opt on real robots, achieving substantial and consistent improvements over QT-Opt in multiple experimental settings that vary environments, skills, and multi-task configurations.
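As background on the auxiliary objective described above: the mutual information between past and future observations is typically not computed exactly but lower-bounded with a contrastive estimator over paired embeddings. The sketch below, in NumPy, shows one common such bound (an InfoNCE-style loss); it is illustrative only and is not the paper's exact objective — the function name, the cosine-similarity scoring, and the `temperature` parameter are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z_past, z_future, temperature=0.1):
    """InfoNCE-style lower bound on I(past; future) for a batch of
    paired embeddings: row i of z_past is the positive match for
    row i of z_future; all other rows in the batch are negatives."""
    # L2-normalize so the pairwise scores are cosine similarities.
    z_past = z_past / np.linalg.norm(z_past, axis=1, keepdims=True)
    z_future = z_future / np.linalg.norm(z_future, axis=1, keepdims=True)
    logits = z_past @ z_future.T / temperature  # (B, B) similarity matrix

    # Cross-entropy with the diagonal (true pairs) as the target class.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In an agent like the one described, a loss of this kind would be minimized jointly with the Q-learning objective, pushing the encoder toward representations in which the future is predictable from the past.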