We present a Reinforcement Learning (RL) algorithm to solve infinite horizon asymptotic Mean Field Game (MFG) and Mean Field Control (MFC) problems. Our approach can be described as a unified two-timescale Mean Field Q-learning: the same algorithm can learn either the MFG or the MFC solution by simply tuning a parameter. The algorithm is set in discrete time and space, and the agent provides not only an action to the environment but also a distribution of the state, in order to account for the mean field feature of the problem. Importantly, we assume that the agent cannot observe the population's distribution and needs to estimate it in a model-free manner. The asymptotic MFG and MFC problems are presented in continuous time and space, and compared with classical (non-asymptotic or stationary) MFG and MFC problems. They lead to explicit solutions in the linear-quadratic (LQ) case, which are used as benchmarks for the results of our algorithm.
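To make the two-timescale idea concrete, a minimal illustrative sketch of such a loop is given below. The environment, reward, state/action sizes, and parameter values are placeholder assumptions for illustration only, not the paper's specification; the point is that the agent updates a Q-table and a model-free estimate of the state distribution with two different learning rates, and the relative speed of those rates determines which regime (MFG-like or MFC-like) the iterates approximate.

```python
import numpy as np

# Sketch of a unified two-timescale mean-field Q-learning loop.
# All names and values here are illustrative assumptions.
n_states, n_actions = 5, 3
gamma = 0.9                      # discount factor (assumed)
rho_Q, rho_mu = 0.1, 0.01        # two timescales: swapping their magnitudes changes the regime

Q = np.zeros((n_states, n_actions))
mu = np.ones(n_states) / n_states     # model-free estimate of the population distribution

def step(s, a, mu):
    """Placeholder environment: next state and a reward coupled to the distribution estimate."""
    s_next = np.random.randint(n_states)
    reward = -abs(s - a) - mu[s]      # toy mean-field coupling
    return s_next, reward

s = np.random.randint(n_states)
for t in range(10_000):
    # epsilon-greedy action choice
    a = np.random.randint(n_actions) if np.random.rand() < 0.1 else Q[s].argmax()
    s_next, r = step(s, a, mu)
    # update the distribution estimate from the observed state (one timescale)
    mu = mu + rho_mu * (np.eye(n_states)[s_next] - mu)
    # Q-learning update (the other timescale)
    Q[s, a] += rho_Q * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```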