In this work, we introduce two algorithmic frameworks, named the Bregman extragradient method and the Bregman extrapolation method, for solving saddle point problems. The proposed frameworks not only include the well-known extragradient and optimistic gradient methods as special cases, but also generate new variants, such as sparse extragradient and extrapolation methods. With the help of the recent concept of relative Lipschitzness and several tools related to Bregman distances, we establish upper bounds, stated in terms of Bregman distances, on certain ``regret'' measures. We then use these bounds to deduce an $\cO(1/k)$ convergence rate for the Bregman extragradient and Bregman extrapolation methods applied to smooth convex-concave saddle point problems. Our theory recovers the main result of [Mokhtari et al. (2020), SIAM J. Optim., 30, pp. 3230--3251] for more general algorithmic frameworks, under weaker assumptions, via a conceptually different approach.
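For orientation, a prototypical Bregman extragradient step, in the spirit of Nemirovski's mirror-prox scheme, can be sketched as follows; the monotone operator $F$, the stepsize $\gamma > 0$, and the Bregman distance $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle$ induced by a convex reference function $\phi$ are notational assumptions for this illustration, and the frameworks above are more general:
\[
  z_{k+1/2} = \operatorname*{arg\,min}_{z} \bigl\{ \gamma \langle F(z_k), z \rangle + D_\phi(z, z_k) \bigr\},
  \qquad
  z_{k+1} = \operatorname*{arg\,min}_{z} \bigl\{ \gamma \langle F(z_{k+1/2}), z \rangle + D_\phi(z, z_k) \bigr\}.
\]
Choosing $\phi(z) = \tfrac{1}{2}\|z\|_2^2$, so that $D_\phi(z, z_k) = \tfrac{1}{2}\|z - z_k\|_2^2$, recovers the classical (Euclidean) extragradient method as a special case.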