A wide range of machine learning tasks can be formulated as non-convex multi-player games, in which a Nash equilibrium (NE) is an acceptable solution for all players, since no player can benefit from changing its strategy unilaterally. Due to the non-convexity, deriving conditions for the existence of a global NE is challenging, let alone designing theoretically guaranteed algorithms for computing one. This paper applies a conjugate transformation to the formulation of non-convex multi-player games and casts the complementary problem as a variational inequality (VI) problem with a continuous pseudo-gradient mapping. We then prove an existence condition for global NE: the solution to the VI problem satisfies a duality relation. Based on this VI formulation, we design a conjugate-based ordinary differential equation (ODE) whose trajectories approach a global NE, and we prove that it converges at an exponential rate. To make the dynamics more implementable, we further derive a discretized algorithm. We apply the algorithm to two typical scenarios: multi-player generalized monotone games and multi-player potential games. In these two settings, we prove that step sizes of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$ yield convergence rates of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$, respectively. Extensive experiments on robust neural network training and sensor localization are in full agreement with our theory.
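To make the discretization concrete, the following is a minimal sketch of pseudo-gradient iterations with a diminishing $\mathcal{O}(1/k)$ step size on a toy two-player strongly monotone quadratic game (this toy game and all names are illustrative assumptions, not the paper's conjugate-based algorithm): player 1 minimizes $x_1^2 + x_1 x_2$ and player 2 minimizes $x_2^2 - x_1 x_2$, so the unique NE is the origin.

```python
# Illustrative sketch: discretized pseudo-gradient dynamics with a
# diminishing O(1/k) step size on a toy strongly monotone game.
# The game and step-size constant are assumptions for illustration only.

def pseudo_grad(x1, x2):
    # Stacked partial gradients: d/dx1 of f1 = x1^2 + x1*x2,
    # and d/dx2 of f2 = x2^2 - x1*x2.
    return 2.0 * x1 + x2, 2.0 * x2 - x1

def run(iters=2000, x1=1.0, x2=1.0, eta0=0.5):
    for k in range(iters):
        g1, g2 = pseudo_grad(x1, x2)
        step = eta0 / (k + 1)          # O(1/k) diminishing step size
        x1 -= step * g1                # each player descends its own
        x2 -= step * g2                # partial gradient simultaneously
    return x1, x2

x1, x2 = run()
# The iterates approach the unique NE at (0, 0).
```

Because the game is strongly monotone, the $\mathcal{O}(1/k)$ step-size schedule suffices for convergence of the iterates to the equilibrium, mirroring the rate claimed for generalized monotone games.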