We study policy gradient for mean-field control in continuous time in a reinforcement learning setting. By considering randomised policies with entropy regularisation, we derive a gradient expectation representation of the value function, which is amenable to actor-critic type algorithms, where the value functions and the policies are learnt alternately based on observation samples of the state and model-free estimation of the population state distribution, either by offline or online learning. In the linear-quadratic mean-field framework, we obtain an exact parametrisation of the actor and critic functions defined on the Wasserstein space. Finally, we illustrate the results of our algorithms with some numerical experiments on concrete examples.
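Below is a minimal, illustrative sketch of the kind of actor-critic loop described above, written for a one-dimensional linear-quadratic example. All model coefficients, the quadratic critic ansatz, the Gaussian policy parametrisation, and the TD(0)/policy-gradient updates are assumptions made for this demo rather than the paper's exact scheme; the population state distribution is estimated model-free via the empirical mean of simulated particles, and the entropy regularisation uses the Gaussian policy's differential entropy.

```python
# Illustrative sketch (not the paper's algorithm): entropy-regularised actor-critic
# for a 1-d linear-quadratic mean-field control problem
#   dX_t = (A X_t + Abar E[X_t] + B a_t) dt + sigma dW_t,
# with running cost Q X_t^2 + Qbar E[X_t]^2 + R a_t^2 - tau * entropy(policy).
import numpy as np

rng = np.random.default_rng(0)

# Model coefficients (illustrative assumptions)
A, Abar, B, sigma = -0.5, 0.3, 1.0, 0.4
Q, Qbar, R, tau = 1.0, 0.5, 0.5, 0.05       # running costs, entropy weight
T, n_steps, N = 1.0, 50, 512                # horizon, time steps, particles
dt = T / n_steps

# Actor: randomised Gaussian policy a ~ N(theta[0]*x + theta[1]*m, exp(theta[2])^2),
# linear in the state x and in the estimated population mean m (LQ structure).
theta = np.array([0.0, 0.0, np.log(0.5)])

# Critic: quadratic ansatz V_w(t, x, m) = (T - t) * (w0 + w1*(x - m)^2 + w2*m^2),
# mimicking an exact quadratic parametrisation in the LQ setting.
w = np.zeros(3)

def critic_features(t, x, m):
    """Critic features averaged over the particle cloud; vanish at t = T."""
    return (T - t) * np.array([1.0, np.mean((x - m) ** 2), m ** 2])

lr_actor, lr_critic = 1e-3, 1e-2

for episode in range(200):
    x = rng.normal(1.0, 0.5, size=N)         # initial particle cloud
    for k in range(n_steps):
        t = k * dt
        m = x.mean()                          # model-free estimate of E[X_t]
        mu, std = theta[0] * x + theta[1] * m, np.exp(theta[2])
        a = mu + std * rng.normal(size=N)     # sample randomised controls

        # One Euler step of the mean-field dynamics for every particle
        x_next = x + (A * x + Abar * m + B * a) * dt \
                   + sigma * np.sqrt(dt) * rng.normal(size=N)
        m_next = x_next.mean()

        # Entropy-regularised running cost, averaged over the cloud
        entropy = 0.5 * np.log(2 * np.pi * np.e * std ** 2)
        cost = np.mean(Q * x**2 + Qbar * m**2 + R * a**2) - tau * entropy

        # Critic: TD(0) semi-gradient step on the cost-to-go
        phi = critic_features(t, x, m)
        phi_next = critic_features(t + dt, x_next, m_next)
        delta = cost * dt + phi_next @ w - phi @ w
        w += lr_critic * delta * phi

        # Actor: policy-gradient step (descent, since delta measures excess cost)
        score = np.array([np.mean((a - mu) / std**2 * x),
                          np.mean((a - mu) / std**2 * m),
                          np.mean(((a - mu) / std)**2 - 1.0)])
        theta -= lr_actor * delta * score

        x = x_next

print("actor parameters (feedback on x, on mean, log-std):", theta)
print("critic parameters:", w)
```

The actor and critic updates are interleaved at every time step of every simulated episode, which corresponds to the alternating, sample-based learning of the policy and the value function mentioned in the abstract; running the same loop on stored trajectories instead of freshly simulated ones would give the offline variant.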