Reinforcement learning (RL) in partially observable, fully cooperative multi-agent settings (Dec-POMDPs) can in principle be used to address many real-world challenges such as controlling a swarm of rescue robots or a team of quadcopters. However, Dec-POMDPs are significantly harder to solve than single-agent problems: the former are NEXP-complete, whereas single-agent MDPs are only P-complete. Hence, current RL algorithms for Dec-POMDPs suffer from poor sample complexity, which greatly reduces their applicability to practical problems where environment interaction is costly. Our key insight is that, using just a polynomial number of samples, one can learn a centralized model that generalizes across different policies. We can then optimize the policy within the learned model instead of the true system, without requiring additional environment interactions. We also learn a centralized exploration policy within our model that collects additional data in state-action regions with high model uncertainty. We empirically evaluate the proposed model-based algorithm, MARCO, in three cooperative communication tasks, where it improves sample efficiency by up to 20x. Finally, to investigate the theoretical sample complexity, we adapt an existing model-based method for tabular MDPs to Dec-POMDPs and prove that it achieves polynomial sample complexity.
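The abstract describes a three-part loop: fit a centralized model from interaction data, run an exploration policy toward regions of high model uncertainty, and optimize the joint policy purely inside the learned model. The sketch below illustrates that loop on a toy one-step speaker-listener communication task; it is a minimal illustration under our own assumptions, not the authors' MARCO implementation. All identifiers (`env_step`, `fit_ensemble`, `random_policy`, `model_return`) are hypothetical, the "model" is a tabular reward ensemble whose disagreement stands in for model uncertainty, and random search stands in for the paper's policy optimizer.

```python
# Minimal sketch of a centralized-model + uncertainty-driven-exploration loop
# (illustrative only; names and design choices are assumptions, not MARCO itself).
import numpy as np

rng = np.random.default_rng(0)

# --- Toy cooperative communication task (one-step speaker-listener) ----------
# Hidden goal g in {0,1}; agent 1 sees g and sends message m; agent 2 sees only
# m and picks action a; the team reward is 1 if a == g.
N_GOALS, N_MSGS, N_ACTS = 2, 2, 2

def env_step(goal, msg, act):
    return 1.0 if act == goal else 0.0

# --- Centralized model: tabular reward estimates, bootstrapped ensemble ------
# Each member is fit on a bootstrap resample of the buffer; disagreement across
# members serves as a proxy for model uncertainty.
def fit_ensemble(buffer, n_members=5):
    models = []
    for _ in range(n_members):
        boot = [buffer[i] for i in rng.integers(0, len(buffer), len(buffer))]
        table = np.full((N_GOALS, N_MSGS, N_ACTS), 0.5)  # prior estimate
        counts = np.zeros_like(table)
        for (g, m, a, r) in boot:
            counts[g, m, a] += 1
            table[g, m, a] += (r - table[g, m, a]) / counts[g, m, a]
        models.append(table)
    return np.stack(models)  # shape: (n_members, G, M, A)

# --- Decentralized policies: agent 1 maps goal->message, agent 2 message->action
def random_policy():
    return rng.integers(0, N_MSGS, N_GOALS), rng.integers(0, N_ACTS, N_MSGS)

def model_return(policy, model):
    speaker, listener = policy
    # Expected team reward under the learned model, averaged over goals.
    return np.mean([model[g, speaker[g], listener[speaker[g]]]
                    for g in range(N_GOALS)])

# --- Outer loop: explore where the model is uncertain, optimize in the model --
buffer = [(g, rng.integers(N_MSGS), rng.integers(N_ACTS)) for g in range(N_GOALS)]
buffer = [(g, m, a, env_step(g, m, a)) for (g, m, a) in buffer]  # seed data

for it in range(20):
    ensemble = fit_ensemble(buffer)

    # Exploration: query the real environment only at the (goal, message, action)
    # triple where the ensemble disagrees most.
    disagreement = ensemble.std(axis=0)
    g, m, a = np.unravel_index(np.argmax(disagreement), disagreement.shape)
    buffer.append((g, m, a, env_step(g, m, a)))

    # Policy optimization happens entirely inside the learned model
    # (random search here stands in for a real policy optimizer).
    mean_model = ensemble.mean(axis=0)
    best = max((random_policy() for _ in range(200)),
               key=lambda p: model_return(p, mean_model))

print("in-model return of best policy:", model_return(best, ensemble.mean(axis=0)))
```

In this sketch, environment interaction is limited to one targeted query per iteration, while all policy search uses the learned model, which is the separation the abstract emphasizes.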