We study unconstrained optimization problems whose objective is a nonsmooth convex function given in the form of a mathematical expectation. The proposed method approximates the expected objective by a sample average function whose sample size is chosen adaptively via an Inexact Restoration strategy. The algorithm uses a line search along descent directions with respect to the current approximate function. We prove almost-sure convergence under standard assumptions. Numerical results on two classes of problems, machine learning loss functions for training classifiers and stochastic linear complementarity problems, demonstrate the efficiency of the proposed scheme.
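To make the ingredients of the abstract concrete, the following is a minimal illustrative sketch of a sample average approximation (SAA) loop with an Armijo line search and a growing sample size. It is not the paper's actual method: the loss is a toy smooth convex function, and the sample-size rule is a crude heuristic stand-in for the Inexact Restoration update, both chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, size=10_000)  # realizations of the random variable xi

def f_N(x, sample):
    # sample average approximation of E[(x - xi)^2] (toy convex loss, not the paper's)
    return np.mean((x - sample) ** 2)

def grad_N(x, sample):
    return np.mean(2.0 * (x - sample))

x = 0.0
N = 10                        # initial sample size
max_N = len(data)
for k in range(100):
    sample = data[:N]
    g = grad_N(x, sample)
    d = -g                    # descent direction w.r.t. the current approximate function
    # Armijo backtracking line search on the current approximation f_N
    t, eta, beta = 1.0, 1e-4, 0.5
    while f_N(x + t * d, sample) > f_N(x, sample) + eta * t * g * d:
        t *= beta
    x = x + t * d
    # crude heuristic stand-in for the Inexact Restoration sample-size update:
    # enlarge N once progress on f_N is small relative to the sampling error scale
    if abs(g) < 1.0 / np.sqrt(N) and N < max_N:
        N = min(2 * N, max_N)
```

In this toy setting the iterates track the sample mean of the current subsample and converge to the mean of the full data set once `N` reaches `max_N`; the paper's contribution lies in the principled Inexact Restoration rule replacing the ad hoc growth condition above, together with the almost-sure convergence analysis.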