Finding equilibria via gradient play in competitive multi-agent games has attracted growing attention in recent years, with an emphasis on designing efficient strategies in which the agents operate in a decentralized and symmetric manner with guaranteed convergence. While significant efforts have been made toward understanding zero-sum two-player matrix games, performance in zero-sum multi-agent games remains inadequately explored, especially in the presence of delayed feedback, leaving the scalability and resiliency of gradient play open to question. In this paper, we make progress by studying asynchronous gradient play in zero-sum polymatrix games under delayed feedback. We first establish that the last iterate of the entropy-regularized optimistic multiplicative weights update (OMWU) method converges linearly to the quantal response equilibrium (QRE), the solution concept under bounded rationality, in the absence of delays. While this linear convergence continues to hold even when the feedback is randomly delayed under mild statistical assumptions, the rate becomes noticeably slower due to a smaller tolerable range of learning rates. Moving beyond, we demonstrate that entropy-regularized OMWU, by adopting two-timescale learning rates in a delay-aware manner, enjoys faster last-iterate convergence under fixed delays, and continues to converge provably in an average-iterate manner even when the delays are arbitrarily bounded. Our methods also lead to finite-time guarantees for approximating the Nash equilibrium (NE) by moderating the amount of regularization. To the best of our knowledge, this work is the first to study asynchronous gradient play in zero-sum polymatrix games under a wide range of delay assumptions, highlighting the role of learning rate separation.
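To make the update rule concrete, below is a minimal sketch of the entropy-regularized OMWU recursion on a two-player zero-sum matrix game (the simplest polymatrix instance), run without delays. The payoff matrix `A`, learning rate `eta`, regularization strength `tau`, and iteration count are illustrative choices, not the paper's experimental setup; the delayed variants studied in the paper would replace the opponent feedback in these same updates with stale iterates.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a vector of logits."""
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy_omwu(A, eta=0.1, tau=0.1, iters=5000):
    """Entropy-regularized OMWU for max_x min_y x^T A y + tau*H(x) - tau*H(y)."""
    m, n = A.shape
    x = np.full(m, 1.0 / m)          # max player's mixed strategy
    y = np.full(n, 1.0 / n)          # min player's mixed strategy
    x_mid, y_mid = x.copy(), y.copy()
    for _ in range(iters):
        # Optimistic (midpoint) step: extrapolate using the previous midpoint's
        # feedback; the (1 - eta*tau)*log(.) factor is the multiplicative-weights
        # form of entropy regularization (geometric damping toward uniform).
        x_mid, y_mid = (
            softmax((1 - eta * tau) * np.log(x) + eta * (A @ y_mid)),
            softmax((1 - eta * tau) * np.log(y) - eta * (A.T @ x_mid)),
        )
        # Full step, taken from (x, y) again but with the fresh midpoint feedback.
        x = softmax((1 - eta * tau) * np.log(x) + eta * (A @ y_mid))
        y = softmax((1 - eta * tau) * np.log(y) - eta * (A.T @ x_mid))
    return x, y

# Example: a random 3x3 zero-sum game. At the QRE, x* = softmax((A y*)/tau)
# and y* = softmax(-(A^T x*)/tau), so both residuals below should be ~0.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
eta, tau = 0.1, 0.1
x, y = entropy_omwu(A, eta=eta, tau=tau)
print(np.abs(x - softmax((A @ y) / tau)).max())
print(np.abs(y - softmax(-(A.T @ x) / tau)).max())
```

The fixed point of this recursion is exactly the QRE of the regularized game: setting `x = softmax((1 - eta*tau)*log(x) + eta*(A @ y))` and solving gives `x ∝ exp((A y)/tau)`, which is why the residual check above serves as a convergence diagnostic.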