We study the policy evaluation problem in multi-agent reinforcement learning, where a group of agents, with jointly observed states and private local actions and rewards, collaborate to learn the value function of a given policy via local computation and communication over a connected undirected network. This problem arises in various large-scale multi-agent systems, including power grids, intelligent transportation systems, wireless sensor networks, and multi-agent robotics. When the dimension of the state-action space is large, temporal-difference learning with linear function approximation is widely used. In this paper, we develop a new distributed temporal-difference learning algorithm and quantify its finite-time performance. Our algorithm combines a distributed stochastic primal-dual method with a homotopy-based approach that adaptively adjusts the learning rate, in order to minimize the mean-square projected Bellman error using fresh online samples drawn from a causal on-policy trajectory. We explicitly account for the Markovian nature of the sampling and improve the best-known finite-time error bound from $O(1/\sqrt{T})$ to~$O(1/T)$, where $T$ is the total number of iterations.
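The abstract does not specify the distributed, homotopy-based algorithm in detail, but the core primal-dual idea for minimizing the mean-square projected Bellman error can be illustrated by a single-agent GTD2-style update with linear function approximation. The sketch below is an assumption-laden simplification: the environment (a small random Markov reward process), the feature matrix, and the constant step sizes `alpha` and `beta` are all illustrative choices, not the paper's method.

```python
import numpy as np

# A minimal single-agent primal-dual TD sketch (GTD2-style), assuming a
# small synthetic Markov reward process. This is NOT the paper's
# distributed, homotopy-based algorithm; it only illustrates the basic
# stochastic primal-dual update that targets the projected Bellman error.
rng = np.random.default_rng(0)

n_states, d, gamma = 5, 3, 0.9
Phi = rng.standard_normal((n_states, d))        # linear features, one row per state
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)               # transition matrix of the fixed policy
R = rng.standard_normal(n_states)               # expected reward per state

theta = np.zeros(d)   # primal variable: value-function weights
w = np.zeros(d)       # dual variable: auxiliary weights
alpha, beta = 0.01, 0.05  # illustrative constant step sizes

s = 0
for t in range(20000):
    s_next = rng.choice(n_states, p=P[s])       # on-policy Markovian sample
    phi, phi_next = Phi[s], Phi[s_next]
    delta = R[s] + gamma * phi_next @ theta - phi @ theta  # TD error
    w += beta * (delta - phi @ w) * phi                    # dual ascent step
    theta += alpha * (phi - gamma * phi_next) * (phi @ w)  # primal descent step
    s = s_next

print(np.round(theta, 3))
```

In the paper's setting, each agent would additionally average its iterates with its neighbors over the communication network, and the homotopy scheme would shrink the learning rate in stages to obtain the $O(1/T)$ rate under Markovian sampling.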