Min-max optimization problems arise in several key machine learning setups, including adversarial learning and generative modeling. In their general form, in the absence of convexity/concavity assumptions, finding pure equilibria of the underlying two-player zero-sum game is computationally hard [Daskalakis et al., 2021]. In this work we focus instead on finding mixed equilibria, and consider the associated lifted problem in the space of probability measures. By adding entropic regularization, our main result establishes global convergence towards the global equilibrium using simultaneous gradient ascent-descent with respect to the Wasserstein metric -- a dynamics that admits efficient particle discretization in high dimensions, as opposed to entropic mirror descent. We complement this positive result with a related entropy-regularized loss which is not bilinear but still convex-concave in the Wasserstein geometry, and for which simultaneous dynamics do not converge, yet timescale-separated dynamics do. Taken together, these results showcase the benign geometry of bilinear games in the space of measures, enabling particle dynamics with global qualitative convergence guarantees.
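To illustrate the kind of dynamics the abstract refers to, the following is a minimal sketch (not the paper's algorithm) of a particle discretization of simultaneous gradient descent-ascent on an entropy-regularized min-max objective. It uses a hypothetical one-dimensional bilinear payoff f(x, y) = K·x·y, implements the entropic regularization as a KL term with respect to a standard Gaussian reference measure (via Langevin noise plus a confining drift), and takes simultaneous steps for both particle populations; all parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff_grads(X, Y, K):
    """Gradients of the mean payoff E[K * x * y] over the two particle clouds.

    X: (n,) particles of the min player; Y: (m,) particles of the max player.
    For the bilinear payoff, each x-gradient is K * mean(Y) and vice versa.
    """
    gx = K * Y.mean() * np.ones_like(X)
    gy = K * X.mean() * np.ones_like(Y)
    return gx, gy

def simultaneous_gda(n=256, m=256, K=1.0, lr=0.05, tau=0.5, steps=2000):
    """Simultaneous particle descent-ascent with entropic regularization.

    tau is the regularization temperature: the drift -tau * X (toward the
    standard Gaussian reference) and the sqrt(2 * tau * lr) Langevin noise
    together discretize the Wasserstein gradient of a KL penalty.
    """
    X = rng.normal(size=n)
    Y = rng.normal(size=m)
    for _ in range(steps):
        gx, gy = payoff_grads(X, Y, K)
        noise = np.sqrt(2.0 * tau * lr)
        # Simultaneous updates: gradient descent for X, ascent for Y.
        X = X - lr * (gx + tau * X) + noise * rng.normal(size=n)
        Y = Y + lr * (gy - tau * Y) + noise * rng.normal(size=m)
    return X, Y

X, Y = simultaneous_gda()
# Without the regularization (tau = 0) the population means would rotate
# around the equilibrium; with it, they spiral toward 0.
print(abs(X.mean()), abs(Y.mean()))
```

The contrast with the unregularized case is the point: for a pure bilinear payoff, simultaneous updates make the two population means rotate around the equilibrium without approaching it, while the entropic term adds a contraction that drives them in.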