We consider constrained optimization problems with a nonsmooth objective function given in the form of a mathematical expectation. Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with a spectral coefficient to provide a suitable direction, which improves the performance of the first-order method, as shown by numerical results. The step sizes are chosen from a predefined interval, and almost sure convergence of the method is proved under standard assumptions in a stochastic environment. To enhance the performance of the proposed algorithm, we further specify the choice of step size by introducing an Armijo-like procedure adapted to this framework. Measuring computational cost on machine learning problems, we conclude that the line search improves performance significantly. Numerical experiments on finite-sum problems also show that the variable sample size strategy outperforms the full-sample approach.
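The ideas above can be illustrated with a minimal sketch: an SAA subgradient of a nonsmooth finite-sum objective, a Barzilai-Borwein (spectral) coefficient safeguarded to a predefined interval, and a sample size that grows toward the full sample. This is not the paper's algorithm; the hinge-loss objective, the safeguard interval, and the sample-growth rule are illustrative assumptions.

```python
import numpy as np

def saa_subgradient(w, X, y, idx):
    """SAA subgradient of f(w) = (1/|idx|) * sum_i max(0, 1 - y_i <x_i, w>)
    over the current sample idx (hypothetical nonsmooth objective)."""
    Xs, ys = X[idx], y[idx]
    active = ys * (Xs @ w) < 1.0          # points where the hinge term is active
    return -(ys[active, None] * Xs[active]).sum(axis=0) / len(idx)

def spectral_saa_step(w, g, w_prev, g_prev, lo=1e-4, hi=1.0):
    """One step along d = -alpha * g with a BB1 spectral step size
    alpha = s^T s / s^T r, clamped to the predefined interval [lo, hi]."""
    s, r = w - w_prev, g - g_prev
    denom = s @ r
    alpha = (s @ s) / denom if denom > 0 else hi
    alpha = min(max(alpha, lo), hi)       # safeguard: keep alpha in [lo, hi]
    return w - alpha * g

# Variable sample size loop: the sample grows toward the full sample.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X @ rng.standard_normal(5))

w, w_prev = np.full(5, 0.1), np.zeros(5)
g_prev = saa_subgradient(w_prev, X, y, np.arange(20))
N_k = 20                                  # initial (small) sample size
for k in range(50):
    idx = rng.choice(len(X), size=N_k, replace=False)
    g = saa_subgradient(w, X, y, idx)
    w, w_prev, g_prev = spectral_saa_step(w, g, w_prev, g_prev), w, g
    N_k = min(int(1.1 * N_k) + 1, len(X))  # increase the sample size each iteration
```

An Armijo-like line search, as mentioned in the abstract, would replace the fixed safeguarded step by backtracking from the spectral trial step until a sufficient-decrease condition on the SAA objective estimate holds.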