High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching.
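The two ingredients named above can be illustrated on a toy problem. The sketch below is an assumption-laden illustration, not the paper's exact algorithm: the objective, the closed-form proximal step, the step-size `lam`, and the median-distance selection rule used as the "probability booster" are all illustrative choices. The noise is Student-t with 3 degrees of freedom, which has bounded variance but is not sub-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = E[0.5 * (x - z)^2] with z = 1 + heavy-tailed noise.
# Student-t noise with df=3 has finite variance but no sub-Gaussian tail,
# matching the bounded-variance regime described in the abstract.
def prox_point_run(x0, lam=0.5, iters=200):
    """One independent run of a stochastic proximal point method.
    Each step solves argmin_y 0.5*(y - z)^2 + ||y - x||^2 / (2*lam)
    for a fresh sample z; for this quadratic the minimizer has the
    closed form below (an implicit / backward stochastic step)."""
    x = x0
    for _ in range(iters):
        z = 1.0 + rng.standard_t(df=3)
        x = (x + lam * z) / (1.0 + lam)
    return x

def boost(candidates):
    """Probability booster (illustrative): among independent runs, return
    the candidate whose median distance to the others is smallest -- a
    standard confidence-amplification trick; the paper's exact booster
    may differ."""
    c = np.asarray(candidates)
    med_dist = [np.median(np.abs(c - ci)) for ci in c]
    return c[int(np.argmin(med_dist))]

# Run several independent low-confidence solvers, then boost.
runs = [prox_point_run(x0=10.0) for _ in range(11)]
x_hat = boost(runs)  # close to the true minimizer x* = 1 with high probability
```

Each individual run is only moderately reliable under heavy-tailed noise; selecting the candidate that agrees most with the others amplifies the per-run success probability into a high-confidence estimate, mirroring the amplification step described above.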