We study stopping rules for stochastic gradient descent (SGD) for convex optimization from the perspective of anytime-valid confidence sequences. Classical analyses of SGD provide convergence guarantees in expectation or at a fixed horizon, but offer no statistically valid way to assess, at an arbitrary time, how close the current iterate is to the optimum. We develop an anytime-valid, data-dependent upper confidence sequence for the weighted average suboptimality of projected SGD, constructed via nonnegative supermartingales and requiring no smoothness or strong convexity. This confidence sequence yields a simple stopping rule that returns an $\varepsilon$-optimal point with probability at least $1-\alpha$ and is almost surely finite under standard stochastic approximation stepsizes. To the best of our knowledge, these are the first rigorous, time-uniform performance guarantees and finite-time $\varepsilon$-optimality certificates for projected SGD with general convex objectives, based solely on observable trajectory quantities.
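To make the idea concrete, here is a minimal toy sketch, not the paper's exact construction: projected SGD on $f(x) = \tfrac{1}{2}\|x\|^2$ over a Euclidean ball, with Gaussian gradient noise of known scale. The problem, stepsizes, noise model, and the particular exponential supermartingale boundary (a fixed-$\lambda$ sub-Gaussian boundary via Ville's inequality) are all illustrative assumptions; the paper's confidence sequence may differ. Every quantity entering the bound is observable along the trajectory.

```python
import numpy as np

# Hypothetical toy instantiation (NOT the paper's exact construction):
# projected SGD on f(x) = 0.5*||x||^2 over the Euclidean ball of radius R,
# with Gaussian gradient noise of known scale sigma. The anytime-valid upper
# bound combines the standard projected-SGD inequality with Ville's
# inequality applied to the nonnegative supermartingale
# exp(lam*S_t - (lam^2/2)*V_t), where S_t is the gradient-noise martingale
# and V_t its sub-Gaussian variance proxy.
rng = np.random.default_rng(0)

d = 5
R = 1.0                  # domain radius; diameter D = 2R bounds ||x_k - x*||
D = 2.0 * R
sigma = 0.1              # known noise scale (an assumption of this sketch)
alpha = 0.05             # bound holds simultaneously for all t w.p. >= 1-alpha
eps = 0.1                # target suboptimality for the stopping rule
lam = 1.0                # any fixed lam > 0 yields a valid supermartingale

x = np.full(d, R / np.sqrt(d))     # start on the boundary of the ball
S_eta = S_grad2 = V = 0.0          # sum eta_k, sum eta_k^2*||g_k||^2, var proxy
xbar_num = np.zeros(d)             # running weighted sum of iterates
t, U = 0, np.inf
while U > eps and t < 100_000:
    t += 1
    eta = 0.5 / np.sqrt(t)                     # stochastic-approximation stepsize
    g = x + sigma * rng.standard_normal(d)     # unbiased gradient of f at x
    xbar_num += eta * x
    S_eta += eta
    S_grad2 += eta**2 * (g @ g)
    V += (eta * sigma * D) ** 2                # variance proxy of the noise term
    x -= eta * g
    nx = np.linalg.norm(x)
    if nx > R:                                 # Euclidean projection onto the ball
        x *= R / nx
    # Time-uniform bound on the weighted average suboptimality:
    # sum_k eta_k (f(x_k) - f*) <= D^2/2 + S_grad2/2 + log(1/alpha)/lam + lam*V/2
    U = (D**2 / 2 + S_grad2 / 2 + np.log(1 / alpha) / lam + lam * V / 2) / S_eta

xbar = xbar_num / S_eta            # weighted average iterate
# At the stopping time, f(xbar) - f* <= U <= eps with probability >= 1 - alpha.
```

Because the boundary $\log(1/\alpha)/\lambda + \lambda V_t/2$ is valid uniformly over $t$, checking `U <= eps` at every iteration costs nothing statistically; with $\eta_t \propto 1/\sqrt{t}$ the bound shrinks to zero, so the loop terminates almost surely.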