We consider the problem of solving nonlinear optimization programs with a stochastic objective and deterministic equality constraints. We assume that the objective value, its gradient, and its Hessian are inaccessible, while stochastic estimates of them can be computed by, for example, subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate our algorithm, we revisit a classical SQP method \citep{Lucidi1990Recursive} developed for deterministic programs. We simplify that method and derive an adaptive SQP, which serves as the skeleton of our stochastic algorithm. Building on the derived algorithm, we then propose a non-adaptive SQP for optimizing stochastic objectives, where the gradient and the Hessian are replaced by stochastic estimates while the stepsize is deterministic and prespecified. Finally, we incorporate a recent stochastic line search procedure \citep{Paquette2020Stochastic} into our non-adaptive stochastic SQP to arrive at an adaptive stochastic SQP. To our knowledge, the proposed algorithm is the first stochastic SQP that allows a line search procedure, and the first stochastic line search procedure that accommodates constraints. We establish global convergence for all proposed SQP methods, and numerical experiments on nonlinear problems in the CUTEst test set demonstrate the superiority of the proposed algorithm.
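To make the SQP framework concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: one SQP iteration for $\min_x f(x)$ subject to $c(x)=0$ solves a Newton--KKT system for a primal-dual direction and then backtracks on a simple augmented Lagrangian merit function. The problem instance, the identity Hessian estimate, the penalty parameter `mu`, and the Armijo-style decrease term are all assumptions made for illustration; in the stochastic setting, `grad_f` would return a subsampled estimate rather than the exact gradient.

```python
import numpy as np

# Toy problem (assumed for illustration):
#   min_x 0.5 * ||x||^2   s.t.   x0 + x1 - 1 = 0
def f(x):      return 0.5 * x @ x
def grad_f(x): return x                      # stand-in for a stochastic gradient estimate
def c(x):      return np.array([x[0] + x[1] - 1.0])
def jac_c(x):  return np.array([[1.0, 1.0]])

def sqp_step(x, lam, mu=10.0):
    g, J, cv = grad_f(x), jac_c(x), c(x)
    H = np.eye(len(x))                       # Hessian estimate (identity for simplicity)
    # Solve the Newton-KKT system for the primal-dual direction (dx, dlam).
    KKT = np.block([[H, J.T], [J, np.zeros((len(cv), len(cv)))]])
    rhs = -np.concatenate([g + J.T @ lam, cv])
    d = np.linalg.solve(KKT, rhs)
    dx, dlam = d[:len(x)], d[len(x):]
    # Backtracking line search on a simple augmented Lagrangian merit function.
    merit = lambda x, lam: f(x) + lam @ c(x) + 0.5 * mu * c(x) @ c(x)
    alpha = 1.0
    while (merit(x + alpha * dx, lam + alpha * dlam)
           > merit(x, lam) - 1e-4 * alpha * (dx @ dx)) and alpha > 1e-8:
        alpha *= 0.5
    return x + alpha * dx, lam + alpha * dlam

x, lam = np.array([2.0, -1.0]), np.zeros(1)
for _ in range(20):
    x, lam = sqp_step(x, lam)
# For this quadratic problem the iterates approach the solution x = (0.5, 0.5).
```

The paper's method differs in essential ways: it uses a differentiable *exact* augmented Lagrangian as the merit function (so that descent can be certified with stochastic estimates) and a stochastic line search rather than the deterministic backtracking shown here.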