We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret than in fully adversarial environments. We study this problem in the context of variationally stable games (a class of continuous games which includes all convex-concave and monotone games), and when the players only have access to noisy estimates of their individual payoff gradients. If the noise is additive, the game-theoretic and purely adversarial settings enjoy similar regret guarantees; however, if the noise is multiplicative, we show that the learners can, in fact, achieve constant regret. We achieve this faster rate via an optimistic gradient scheme with learning rate separation: the method's extrapolation and update steps are tuned to different schedules, depending on the noise profile. Subsequently, to eliminate the need for delicate hyperparameter tuning, we propose a fully adaptive method that attains nearly the same guarantees as its non-adaptive counterpart, while operating without knowledge of either the game or of the noise profile.
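To make the idea of learning rate separation concrete, the following is a minimal, deterministic sketch of an optimistic gradient loop in which the extrapolation step uses a rate `gamma` and the update step a smaller rate `eta`, illustrated on the bilinear min-max game f(x, y) = x·y. The function names, the constant step sizes, and the noiseless gradient oracle are illustrative assumptions; the paper's actual schedules depend on the noise profile and are not reproduced here.

```python
import numpy as np

def V(z):
    # Gradient field of the bilinear min-max game f(x, y) = x * y:
    # player x descends on f, player y ascends (a monotone game).
    x, y = z
    return np.array([y, -x])

def optimistic_gradient(z0, gamma, eta, n_steps):
    """Optimistic gradient with learning rate separation (illustrative):
    gamma drives the extrapolation step, eta the update step."""
    z = np.asarray(z0, dtype=float)
    g_prev = np.zeros_like(z)          # no past gradient at t = 0
    for _ in range(n_steps):
        z_half = z - gamma * g_prev    # extrapolate with the past gradient
        g = V(z_half)                  # query the oracle at the leading state
        z = z - eta * g                # conservative update (eta <= gamma)
        g_prev = g
    return z

z_final = optimistic_gradient([1.0, 1.0], gamma=0.3, eta=0.1, n_steps=1000)
print(np.linalg.norm(z_final))  # shrinks toward the equilibrium (0, 0)
```

With `eta = gamma` this reduces to the standard optimistic gradient method; decoupling the two rates is what allows the update step to stay conservative while the extrapolation step remains aggressive.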