We show that Optimistic Hedge -- a common variant of multiplicative-weights-updates with recency bias -- attains ${\rm poly}(\log T)$ regret in multi-player general-sum games. In particular, when every player of the game uses Optimistic Hedge to iteratively update her strategy in response to the history of play so far, then after $T$ rounds of interaction, each player experiences total regret that is ${\rm poly}(\log T)$. Our bound improves, exponentially, the $O({T}^{1/2})$ regret attainable by standard no-regret learners in games, the $O(T^{1/4})$ regret attainable by no-regret learners with recency bias (Syrgkanis et al., 2015), and the ${O}(T^{5/6})$ bound that was recently shown for Optimistic Hedge in the special case of two-player games (Chen & Peng, 2020). A corollary of our bound is that Optimistic Hedge converges to coarse correlated equilibrium in general games at a rate of $\tilde{O}\left(\frac 1T\right)$.
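To make the algorithm concrete, here is a minimal sketch of the Optimistic Hedge update for a single learner. This is an illustrative implementation, not code from the paper: it uses the standard optimistic multiplicative-weights rule, in which each weight is multiplied by $\exp(-\eta(2\ell_t - \ell_{t-1}))$ -- ordinary Hedge plus a recency-bias term that optimistically predicts the next loss will equal the most recent one. The function name, step size `eta`, and the convention $\ell_0 = 0$ are choices made here for the sketch.

```python
import numpy as np

def optimistic_hedge(losses, eta=0.1):
    """Run Optimistic Hedge on a sequence of loss vectors.

    losses : array of shape (T, n), one loss vector per round.
    Returns the (T, n) array of mixed strategies played each round.
    """
    n = losses.shape[1]
    x = np.full(n, 1.0 / n)      # start at the uniform strategy
    prev = np.zeros(n)           # convention: loss before round 1 is 0
    plays = []
    for loss in losses:
        plays.append(x.copy())
        # Optimistic update: penalize twice the current loss, then
        # cancel the previous round's (already-counted) loss.
        logits = np.log(x) - eta * (2 * loss - prev)
        x = np.exp(logits - logits.max())
        x /= x.sum()             # renormalize onto the simplex
        prev = loss
    return np.array(plays)
```

On a constant loss sequence the strategy concentrates on the cheaper action, as any no-regret learner must; in the multi-player setting of the abstract, every player would run this same update against the losses induced by the others' play.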