The use of min-max optimization in the adversarial training of deep neural network classifiers and the training of generative adversarial networks has motivated the study of nonconvex-nonconcave optimization objectives, which arise frequently in these applications. Unfortunately, recent results have established that computing even approximate first-order stationary points of such objectives is intractable, even under smoothness conditions, motivating the study of min-max objectives with additional structure. We introduce a new class of structured nonconvex-nonconcave min-max optimization problems and propose a generalization of the extragradient algorithm that provably converges to a stationary point. The algorithm applies not only to Euclidean spaces but also to general $\ell_p$-normed finite-dimensional real vector spaces. We also discuss its stability under stochastic oracles and provide bounds on its sample complexity. Our iteration-complexity and sample-complexity bounds either match or improve upon the best known bounds for the same or less general nonconvex-nonconcave settings, such as those satisfying variational coherence or those in which a weak solution to the associated variational inequality problem is assumed to exist.
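For context, the classical extragradient method, which the proposed algorithm generalizes, alternates an extrapolation step and an update step (a standard textbook formulation, not reproduced from this paper):
\[
  z_{k+1/2} = z_k - \eta F(z_k), \qquad z_{k+1} = z_k - \eta F(z_{k+1/2}),
\]
where $F(x, y) = \big(\nabla_x f(x, y),\, -\nabla_y f(x, y)\big)$ is the operator associated with the min-max objective $\min_x \max_y f(x, y)$ and $\eta > 0$ is a step size.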