We consider online no-regret learning in unknown games with bandit feedback, where each agent only observes its reward at each time -- determined by all players' current joint action -- rather than its gradient. We focus on the class of smooth and strongly monotone games and study optimal no-regret learning therein. Leveraging self-concordant barrier functions, we first construct an online bandit convex optimization algorithm and show that it achieves the single-agent optimal regret of $\tilde{\Theta}(\sqrt{T})$ under smooth and strongly-concave payoff functions. We then show that if each agent applies this no-regret learning algorithm in strongly monotone games, the joint action converges in \textit{last iterate} to the unique Nash equilibrium at a rate of $\tilde{\Theta}(1/\sqrt{T})$. Prior to our work, the best-known convergence rate in the same class of games was $O(1/T^{1/3})$ (achieved by a different algorithm), leaving open the problem of an optimal no-regret learning algorithm (since the known lower bound is $\Omega(1/\sqrt{T})$). Our results thus settle this open problem and contribute to the broad landscape of bandit game-theoretical learning by identifying the first doubly optimal bandit learning algorithm, in that it achieves (up to log factors) both optimal regret in single-agent learning and the optimal last-iterate convergence rate in multi-agent learning. We also present results from several simulation studies -- Cournot competition, Kelly auctions, and distributed regularized logistic regression -- to demonstrate the efficacy of our algorithm.
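To make the bandit-feedback setting concrete, below is a minimal sketch of single-point (zeroth-order) gradient estimation with projected gradient ascent, in the spirit of classical bandit convex optimization. This is an illustrative stand-in, not the paper's barrier-based algorithm: it uses a Euclidean projection onto a ball rather than a self-concordant barrier, and all function names, step sizes, and the quadratic payoff are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bandit_gradient_ascent(payoff, x0, T, delta=0.1, eta=0.05, radius=1.0):
    """Maximize `payoff` using only bandit (reward-value) feedback.

    Each round queries the payoff at a single randomly perturbed point and
    forms a one-point gradient estimate; a true gradient is never observed.
    (Illustrative sketch; parameters are not tuned to match any theory.)
    """
    d = len(x0)
    x = np.array(x0, dtype=float)
    for t in range(1, T + 1):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)              # random unit direction
        reward = payoff(x + delta * u)      # single bandit query per round
        g_hat = (d / delta) * reward * u    # one-point gradient estimate
        x = x + (eta / np.sqrt(t)) * g_hat  # decaying-step gradient ascent
        n = np.linalg.norm(x)
        if n > radius:                      # project onto the feasible ball
            x *= radius / n
    return x

# Hypothetical smooth, strongly concave payoff with maximizer at (0.5, -0.3).
opt = np.array([0.5, -0.3])
payoff = lambda x: -np.sum((x - opt) ** 2)
x_T = bandit_gradient_ascent(payoff, np.zeros(2), T=20000)
```

In a game, each player would run such a loop on its own action while the reward depends on the joint action; the paper's contribution is an algorithm of this flavor whose regret and last-iterate convergence rates are both optimal up to log factors.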