In this paper, we study the min-max optimization problems on Riemannian manifolds. We introduce a Riemannian Hamiltonian function, minimization of which serves as a proxy for solving the original min-max problems. Under the Riemannian Polyak--{\L}ojasiewicz (PL) condition on the Hamiltonian function, its minimizer corresponds to the desired min-max saddle point. We also provide cases where this condition is satisfied. To minimize the Hamiltonian function, we propose Riemannian Hamiltonian methods (RHM) and present their convergence analysis. We extend RHM to include a consensus regularization and to the stochastic setting. We illustrate the efficacy of the proposed RHM in applications such as subspace robust Wasserstein distance, robust training of neural networks, and generative adversarial networks.