Concave Utility Reinforcement Learning (CURL) extends RL from linear to concave utilities of the occupancy measure induced by the agent's policy. This encompasses not only RL but also imitation learning and exploration, among other settings. Yet this more general paradigm invalidates the classical Bellman equations and calls for new algorithms. Mean-field Games (MFGs) are a continuous approximation of many-agent RL. They consider the limit case of a continuous distribution of identical, anonymous agents with symmetric interests, and reduce the problem to the study of a single representative agent interacting with the full population. Our core contribution is to show that CURL is a subclass of MFGs. We believe this is important for bridging the two communities. It also sheds light on aspects of both fields: we show the equivalence between concavity in CURL and monotonicity in the associated MFG, between optimality conditions in CURL and the Nash equilibrium in the MFG, and that Fictitious Play (FP) for this class of MFGs is simply Frank-Wolfe, yielding the first convergence rate for discrete-time FP for MFGs. We also demonstrate experimentally that, using algorithms recently introduced for solving MFGs, we can address the CURL problem more efficiently.
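The abstract's claim that Fictitious Play reduces to Frank-Wolfe for this class of MFGs can be illustrated with a minimal sketch. Below, the occupancy-measure polytope is simplified to a probability simplex, and the concave utility is the entropy (an exploration-style CURL objective); the function name `frank_wolfe_simplex` and all numerical choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def frank_wolfe_simplex(grad_F, n, iters=2000):
    """Frank-Wolfe (equivalently, fictitious-play averaging) for maximizing a
    concave utility F over the probability simplex, used here as a simplified
    stand-in for the occupancy-measure polytope of an MDP."""
    mu = np.full(n, 1.0 / n)              # start from the uniform distribution
    for k in range(iters):
        g = grad_F(mu)                    # gradient of the concave utility at mu
        vertex = np.zeros(n)
        vertex[np.argmax(g)] = 1.0        # best response: a linear objective is
                                          # maximized at a vertex of the simplex
        gamma = 2.0 / (k + 2.0)           # classical FW step size, O(1/k) rate
        mu = (1 - gamma) * mu + gamma * vertex  # averaging step, as in FP
    return mu

# Entropy utility F(mu) = -sum(mu * log mu); its unique maximizer is uniform.
grad_entropy = lambda mu: -(np.log(mu + 1e-12) + 1.0)
mu_star = frank_wolfe_simplex(grad_entropy, n=5)
```

The best-response step (picking a vertex) plays the role of the representative agent's best response against the current population distribution, while the `gamma`-averaging is the fictitious-play update of that distribution.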