This paper investigates the convergence time of log-linear learning to an $\epsilon$-efficient Nash equilibrium (NE) in potential games. In such games, an efficient NE is defined as a maximizer of the potential function. Previous literature provides asymptotic convergence guarantees to efficient Nash equilibria, and existing finite-time rates are limited to potential games satisfying additional assumptions such as the interchangeability of players. In this paper, we prove the first finite-time convergence to an $\epsilon$-efficient NE in general potential games. Our bounds depend polynomially on $1/\epsilon$, an improvement over previous bounds that are exponential in $1/\epsilon$ and hold only for subclasses of potential games. We then strengthen our convergence result in two directions: first, we show that a variant of log-linear learning that requires a factor of $A$ less feedback on the utility per round enjoys a similar convergence time; second, we demonstrate the robustness of our convergence guarantee when log-linear learning is subject to small perturbations such as alterations in the learning rule or noise-corrupted utilities.
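To make the dynamics concrete, here is a minimal simulation sketch of standard log-linear learning on a toy identity-interest potential game. The specific potential matrix, inverse temperature `beta`, and round count are illustrative assumptions, not values from the paper; in each round one randomly chosen player revises its action with probability proportional to $e^{\beta u}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-player, 2-action identity-interest game: every player's
# utility equals the potential, so this is a potential game. The joint
# action (1, 1) maximizes the potential, i.e. it is the efficient NE.
phi = np.array([[1.0, 1.5],
                [0.0, 2.0]])

beta = 20.0        # inverse temperature; larger beta targets smaller epsilon
actions = [0, 0]   # current joint action profile
counts = np.zeros((2, 2))

for _ in range(2000):
    i = rng.integers(2)                    # one player revises per round
    # Utility of each candidate action given the opponent's current action.
    u = phi[:, actions[1]] if i == 0 else phi[actions[0], :]
    p = np.exp(beta * (u - u.max()))       # numerically stable softmax
    p /= p.sum()
    actions[i] = rng.choice(2, p=p)
    counts[actions[0], actions[1]] += 1

# For large beta, the empirical joint-action distribution concentrates
# near the potential maximizer (1, 1).
frac_at_max = counts[1, 1] / counts.sum()
```

This reflects the asymptotic picture from prior work (the stationary distribution concentrates on potential maximizers as $\beta \to \infty$); the paper's contribution is bounding how long such concentration takes in finite time.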