In this paper, we present a unified and general framework for analyzing the batch-updating approach to nonlinear, high-dimensional optimization. The framework encompasses all currently used batch-updating approaches and applies to nonconvex as well as convex objective functions. Moreover, it permits the use of noise-corrupted gradients, as well as first-order approximations to the gradient (sometimes referred to as "gradient-free" or zeroth-order approaches). By viewing the analysis of the iterations as a problem in the convergence of stochastic processes, we are able to establish a very general theorem that subsumes most known convergence results for zeroth-order and first-order methods. The analysis of "second-order" or momentum-based methods is not part of this paper and will be studied elsewhere. However, numerical experiments indicate that momentum-based methods can fail when the true gradient is replaced by its first-order approximation; this calls for further theoretical analysis.
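The two ingredients named above, a batch-updating iteration driven by a noise-corrupted gradient estimate and a first-order (forward-difference) approximation to the gradient, can be sketched as follows. This is a minimal illustration under assumed choices (the quadratic objective, the noise model, and the step-size schedule are not taken from the paper):

```python
import numpy as np

def fd_gradient(f, theta, c=1e-3):
    # Forward-difference approximation to the gradient: first-order
    # accurate in the increment c, using only function evaluations
    # (the "gradient-free" setting).
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = c
        g[i] = (f(theta + e) - f(theta)) / c
    return g

def batch_update(f, theta0, step_size, grad_estimate, n_iters=500, seed=0):
    # Generic batch-updating iteration: theta_{t+1} = theta_t - a_t * h_t,
    # where h_t is a (possibly noise-corrupted, possibly approximate)
    # gradient estimate.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for t in range(n_iters):
        h = grad_estimate(f, theta, rng)
        theta = theta - step_size(t) * h
    return theta

# Illustrative objective: a smooth convex quadratic, minimized at theta = 1.
f = lambda x: float(np.sum((x - 1.0) ** 2))

# Noise-corrupted first-order approximation of the gradient.
def noisy_fd(f, theta, rng, sigma=0.1):
    return fd_gradient(f, theta) + sigma * rng.standard_normal(theta.shape)

# Diminishing step sizes a_t satisfying the usual Robbins-Monro conditions
# (sum of a_t diverges, sum of a_t^2 converges).
theta_star = batch_update(f, [5.0, -3.0], lambda t: 0.5 / (t + 5), noisy_fd)
```

Replacing `noisy_fd` with the exact gradient recovers the standard stochastic-gradient setting; the framework described in the paper covers both estimates within one convergence analysis.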