In this work, we describe a generic approach for establishing high-probability convergence guarantees in stochastic convex optimization. In previous works, either convergence holds only in expectation or the bound depends on the diameter of the domain. In contrast, we prove high-probability convergence with bounds that depend on the initial distance to the optimal solution rather than the domain diameter. The algorithms use step sizes analogous to the standard settings and are universal for Lipschitz functions, smooth functions, and their linear combinations.
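For intuition only, a minimal sketch of the standard stochastic gradient template with the usual step-size choices is shown below; the notation ($\hat{g}_t$ for an unbiased stochastic (sub)gradient at $x_t$, $G$ for the Lipschitz constant, $L$ for the smoothness constant, $R$ for the initial distance to an optimum, and $T$ for the iteration budget) is illustrative and these constants are not claimed to be the paper's exact choices.
\[
x_{t+1} = x_t - \eta_t \hat{g}_t,
\qquad
\eta_t \asymp \frac{R}{G\sqrt{T}} \ \text{($G$-Lipschitz case)},
\qquad
\eta_t \asymp \frac{1}{L} \ \text{($L$-smooth case)}.
\]
The point of the sketch is that the step sizes mirror the standard in-expectation settings, while the bounds discussed above are stated in terms of $R$ rather than the domain diameter.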