In this paper, we analyze the local convergence rate of optimistic mirror descent methods in stochastic variational inequalities, a class of optimization problems with important applications to learning theory and machine learning. Our analysis reveals an intricate relation between the algorithm's rate of convergence and the local geometry induced by the method's underlying Bregman function. We quantify this relation by means of the Legendre exponent, a notion that we introduce to measure the growth rate of the Bregman divergence relative to the ambient norm near a solution. We show that this exponent determines both the optimal step-size policy of the algorithm and the optimal rates attained, explaining in this way the differences observed for some popular Bregman functions (Euclidean projection, negative entropy, fractional power, etc.).
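To make the setting concrete, the following is a minimal sketch, not the paper's implementation, of optimistic mirror descent on a toy stochastic variational inequality: a two-player zero-sum matrix game over a product of simplices, run with the negative-entropy Bregman function (so the prox-mapping is a multiplicative-weights update). The payoff matrix, the noise model, and the step-size schedule gamma_t = gamma / t^p are illustrative assumptions, chosen only to show how the mirror map and step-size policy enter the method.

```python
# Sketch of optimistic mirror descent (OMD) for a stochastic VI, under the
# assumptions stated above: zero-sum matrix game, entropic mirror map,
# additive Gaussian gradient noise, step-size gamma_t = gamma / t**p.
import numpy as np

def entropic_prox(x, v, step):
    """Bregman prox-mapping for the negative entropy on the simplex:
    argmin_{x'} { step*<v, x'> + KL(x', x) } = x * exp(-step*v), renormalized."""
    w = x * np.exp(-step * v)
    return w / w.sum()

def noisy_field(x, y, A, rng, sigma=0.1):
    """Stochastic estimate of the game's vector field V(x, y) = (A y, -A^T x)."""
    gx = A @ y + sigma * rng.standard_normal(len(x))
    gy = -A.T @ x + sigma * rng.standard_normal(len(y))
    return gx, gy

def optimistic_md(A, n_iter=5000, gamma=1.0, p=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = np.full(n, 1.0 / n)        # base (pivot) states
    y = np.full(m, 1.0 / m)
    gx_prev = np.zeros(n)          # last observed gradients: the "optimistic" memory
    gy_prev = np.zeros(m)
    for t in range(1, n_iter + 1):
        step = gamma / t**p        # assumed step-size policy
        # leading half-step, reusing the previous gradient
        x_half = entropic_prox(x, gx_prev, step)
        y_half = entropic_prox(y, gy_prev, step)
        # query the stochastic field at the half-step, then update the base state
        gx, gy = noisy_field(x_half, y_half, A, rng)
        x = entropic_prox(x, gx, step)
        y = entropic_prox(y, gy, step)
        gx_prev, gy_prev = gx, gy
    return x, y

if __name__ == "__main__":
    A = np.array([[ 0.0,  1.0, -1.0],
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])   # rock-paper-scissors; the solution is uniform play
    x, y = optimistic_md(A)
    print("x* ~", np.round(x, 3), " y* ~", np.round(y, 3))
```

Swapping `entropic_prox` for a Euclidean projection onto the feasible set changes the Bregman function, and hence the local geometry (and Legendre exponent) that governs the rates discussed in the paper.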