One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks. However, the generalization ability of model-based agents is not well understood because existing work has focused on model-free agents when benchmarking generalization. Here, we explicitly measure the generalization ability of model-based agents in comparison to their model-free counterparts. We focus our analysis on MuZero (Schrittwieser et al., 2020), a powerful model-based agent, and evaluate its performance on both procedural and task generalization. We identify three factors of procedural generalization -- planning, self-supervised representation learning, and procedural data diversity -- and show that by combining these techniques, we achieve state-of-the-art generalization performance and data efficiency on Procgen (Cobbe et al., 2019). However, we find that these factors do not always provide the same benefits for the task generalization benchmarks in Meta-World (Yu et al., 2019), indicating that transfer remains a challenge and may require different approaches than procedural generalization. Overall, we suggest that building generalizable agents requires moving beyond the single-task, model-free paradigm and towards self-supervised model-based agents that are trained in rich, procedural, multi-task environments.