In this paper, we consider first-order convergence theory and algorithms for solving a class of non-convex non-concave min-max saddle-point problems, whose objective function is weakly convex in the variables of minimization and weakly concave in the variables of maximization. This problem class has many important applications in machine learning, including training Generative Adversarial Nets (GANs). We propose an algorithmic framework motivated by the inexact proximal point method, in which the weakly monotone variational inequality (VI) corresponding to the original min-max problem is solved by approximately solving a sequence of strongly monotone VIs, each constructed by adding a strongly monotone mapping to the original gradient mapping. We prove first-order convergence of the generic algorithmic framework to a nearly stationary solution of the original min-max problem, and establish different convergence rates by employing different algorithms for solving each strongly monotone VI. Experiments verify the convergence theory and also demonstrate the effectiveness of the proposed methods on training GANs.
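The proximal point construction described above can be sketched in a few lines: each outer step freezes the current iterate as a proximal center, adds the strongly monotone term γ(z − center) to the original gradient mapping, and approximately solves the resulting strongly monotone VI with a simple inner loop. The toy bilinear objective f(x, y) = x·y, the inner gradient iterations, and all constants (γ, step size, iteration counts) below are illustrative assumptions for this sketch, not the paper's exact algorithm or parameters.

```python
import numpy as np

def grad_map(z):
    # Gradient mapping F(x, y) = (df/dx, -df/dy) for the toy saddle
    # problem f(x, y) = x * y, whose unique saddle point is (0, 0).
    x, y = z
    return np.array([y, -x])

def inexact_proximal_point(z0, gamma=1.0, outer=50, inner=200, eta=0.05):
    # Sketch of the inexact proximal point framework: each outer
    # iteration approximately solves the strongly monotone VI with
    # mapping F(z) + gamma * (z - center) via plain gradient steps.
    z = np.asarray(z0, dtype=float)
    for _ in range(outer):
        center = z.copy()
        for _ in range(inner):
            z = z - eta * (grad_map(z) + gamma * (z - center))
    return z

z_star = inexact_proximal_point([2.0, -1.5])
print(np.linalg.norm(z_star))  # close to 0: iterates approach the saddle point
```

Note that plain simultaneous gradient descent-ascent diverges on this bilinear example; the added strongly monotone term γ(z − center) is what makes each inner subproblem contractive and drives the outer iterates toward the saddle point.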