We examine the long-run behavior of a wide range of dynamics for learning in nonatomic games, in both discrete and continuous time. The class of dynamics under consideration includes fictitious play and its regularized variants, the best-reply dynamics (again, possibly regularized), as well as the dynamics of dual averaging / "follow the regularized leader" (which themselves include as special cases the replicator dynamics and Friedman's projection dynamics). Our analysis concerns both the actual trajectory of play and its time-average, and we cover potential and monotone games, as well as games with an evolutionarily stable state (global or otherwise). We focus exclusively on games with finite action spaces; nonatomic games with continuous action spaces are treated in detail in Part II of this paper.
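As a concrete illustration of one of the dynamics named above, the following is a minimal sketch (not taken from the paper) of discrete-time dual averaging with an entropic regularizer — i.e., exponential weights, whose continuous-time limit is the replicator dynamics — applied to a hypothetical two-route nonatomic congestion game with affine costs, a simple potential game. All payoff parameters and the learning rate are illustrative assumptions.

```python
import math

def payoffs(x):
    # Hypothetical two-route congestion game: a unit mass of players splits
    # between routes with loads x[0] and x[1]; payoffs are negative costs.
    # cost_1(x1) = 2*x1, cost_2(x2) = x2 + 0.5  (equilibrium: x1 = x2 = 0.5)
    return [-2.0 * x[0], -(x[1] + 0.5)]

def exponential_weights(steps=2000, eta=0.1, y=(2.0, 0.0)):
    """Dual averaging with entropic regularizer (exponential weights)."""
    y = list(y)  # cumulative payoff scores (the dual variable)
    for _ in range(steps):
        # Mirror step: map scores to the simplex via softmax.
        m = max(y)
        w = [math.exp(v - m) for v in y]
        s = sum(w)
        x = [v / s for v in w]
        # Aggregate payoffs at the current population state, then update scores.
        v = payoffs(x)
        y = [yi + eta * vi for yi, vi in zip(y, v)]
    return x

state = exponential_weights()
```

Because the game is a potential game with a strictly concave potential, the trajectory converges to the unique equilibrium split where the two route costs equalize (here, one half of the population on each route); this is the kind of long-run behavior the paper analyzes in much greater generality.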