We revisit random search for stochastic optimization, where only noisy function evaluations are available. We show that the method works under weaker smoothness assumptions than previously considered, and that stronger assumptions enable improved guarantees. In the finite-sum setting, we design a variance-reduced variant that leverages multiple samples to accelerate convergence. Our analysis relies on a simple translation invariance property, which provides a principled way to balance noise and reduce variance.
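The abstract describes the method only at a high level. As a rough illustration of the setting (not the paper's algorithm or its variance-reduced finite-sum variant), the Python sketch below runs a generic best-of-three random-search step on noisy function evaluations, averaging a small batch of evaluations per point as one simple way to trade extra samples for lower noise. The function name `noisy_random_search`, its parameters, and the best-of-three update rule are illustrative assumptions.

```python
import numpy as np

def noisy_random_search(f, x0, step=0.1, iters=500, batch=1, rng=None):
    """Minimal random-search sketch for noisy zeroth-order optimization.

    f(x) is assumed to return a noisy estimate of the objective at x.
    All names and defaults here are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)

    def fhat(z):
        # Average several noisy evaluations to reduce variance.
        return np.mean([f(z) for _ in range(batch)])

    for _ in range(iters):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)                   # random unit direction
        candidates = [x, x + step * u, x - step * u]
        values = [fhat(c) for c in candidates]
        x = candidates[int(np.argmin(values))]   # keep the best of the three points
    return x

# Toy usage: a quadratic with additive evaluation noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: float(np.sum(x**2) + 0.01 * rng.standard_normal())
    x_out = noisy_random_search(f, x0=np.ones(10), step=0.2, iters=2000, batch=4, rng=rng)
    print(np.sum(x_out**2))
```

The per-point batch averaging above is only a stand-in for sample reuse; the paper's variance-reduced variant for finite sums and its translation-invariance-based analysis are not reflected in this sketch.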