We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints. We assume that the objective value, gradient, and Hessian are inaccessible, while stochastic estimates of them can be computed by, for example, subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate our algorithm design, we first revisit and simplify an old SQP method \citep{Lucidi1990Recursive} developed for solving deterministic problems, which serves as the skeleton of our stochastic algorithm. Based on the simplified deterministic algorithm, we then propose a non-adaptive SQP method for handling a stochastic objective, where the gradient and Hessian are replaced by stochastic estimates but the stepsizes are deterministic and prespecified. Finally, we incorporate a recent stochastic line search procedure \citep{Paquette2020Stochastic} into the non-adaptive stochastic SQP to adaptively select the random stepsizes, which leads to an adaptive stochastic SQP method. Global "almost sure" convergence is established for both the non-adaptive and adaptive SQP methods. Numerical experiments on nonlinear problems in the CUTEst test set demonstrate the superiority of the adaptive algorithm.