We present a new family of min-max optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations to perform more informative extra-gradient steps in later ones. Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smooth or not, without requiring any prior tuning by the optimizer. As a result, the algorithm simultaneously achieves order-optimal convergence rates, i.e., it converges to an $\varepsilon$-optimal solution within $\mathcal{O}(1/\varepsilon)$ iterations in smooth problems, and within $\mathcal{O}(1/\varepsilon^2)$ iterations in non-smooth ones. Importantly, these guarantees do not require any of the standard boundedness or Lipschitz continuity conditions that are typically assumed in the literature; in particular, they apply even to problems with singularities (such as resource allocation problems and the like). This adaptation is achieved through the use of a geometric apparatus based on Finsler metrics and a suitably chosen mirror-prox template that allows us to derive sharp convergence rates for the methods at hand.
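The core template described above — an extra-gradient step whose step size adapts to the gradient data accumulated over earlier iterations — can be illustrated with a minimal sketch. This is not the paper's actual method (which relies on a Finsler-metric apparatus and a mirror-prox template); it is a generic AdaGrad-style adaptive extra-gradient iteration on the toy bilinear saddle problem $f(x, y) = xy$, with all function names and constants chosen for illustration only.

```python
import math

def grad(x, y):
    # Gradient field of the saddle objective f(x, y) = x * y:
    # the min player descends in x (df/dx = y),
    # the max player ascends in y (-df/dy = -x).
    return y, -x

def adaptive_extragradient(x, y, iters=2000):
    g2_sum = 0.0  # running sum of squared gradient norms
    for _ in range(iters):
        gx, gy = grad(x, y)
        g2_sum += gx * gx + gy * gy
        # AdaGrad-style step size: shrinks only as fast as the
        # observed gradients force it to, so it stays large on
        # "easy" (smooth) problems and decays on rough ones.
        eta = 1.0 / math.sqrt(1.0 + g2_sum)
        # Extrapolation ("leading") step:
        xh, yh = x - eta * gx, y - eta * gy
        # Update step uses the gradient at the extrapolated point:
        gxh, gyh = grad(xh, yh)
        x, y = x - eta * gxh, y - eta * gyh
    return x, y

x_final, y_final = adaptive_extragradient(1.0, 1.0)
print(math.hypot(x_final, y_final))  # distance to the saddle point (0, 0)
```

On this bilinear example plain gradient descent-ascent spirals outward, while the extra-gradient correction contracts toward the saddle point at the origin; the adaptive step size is set without any knowledge of the problem's smoothness constant.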