We consider unconstrained optimization problems with a nonsmooth convex objective function given in the form of a mathematical expectation. The proposed method approximates the objective function by a sample average function, using a different sample size at each iteration. The sample size is chosen adaptively based on the Inexact Restoration approach. The method uses a line search and assumes descent directions with respect to the current approximate function. We prove almost sure convergence under standard assumptions. The convergence rate is also analyzed, and a worst-case complexity of $\mathcal{O}(\varepsilon^{-2})$ is proved. Numerical results on two classes of problems, hinge-loss problems from machine learning and stochastic linear complementarity problems, demonstrate the efficiency of the proposed scheme.
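As an illustration only, the following Python sketch mirrors the general scheme outlined above under several assumptions not made in the text: the Inexact Restoration sample-size rule is replaced by a simple geometric growth heuristic, the negative sample subgradient is taken as the descent direction, and the hinge-loss sample function as well as all identifiers (`adaptive_saa`, `grow_factor`, `sample_objective`) are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def sample_objective(x, xi_batch):
    # Hinge-loss-style nonsmooth convex sample function (placeholder problem):
    # each row of xi_batch encodes (a_i, b_i) and F(x, xi_i) = max(0, 1 - b_i a_i^T x).
    a, b = xi_batch[:, :-1], xi_batch[:, -1]
    return np.maximum(0.0, 1.0 - b * (a @ x))

def sample_subgradient(x, xi_batch):
    # One subgradient of the sample average: -b_i a_i on samples with positive loss.
    a, b = xi_batch[:, :-1], xi_batch[:, -1]
    active = (1.0 - b * (a @ x)) > 0.0
    return -(b[active, None] * a[active]).sum(axis=0) / len(xi_batch)

def adaptive_saa(x0, xi_pool, N0=10, grow_factor=1.5, c1=1e-4,
                 beta=0.5, max_iter=100):
    """Sample average approximation with a growing sample size and
    an Armijo-type backtracking line search on the current f_N."""
    x, N = x0.copy(), N0
    for _ in range(max_iter):
        batch = xi_pool[:N]                 # current sample of size N
        f = sample_objective(x, batch).mean()
        g = sample_subgradient(x, batch)
        d = -g                              # heuristic descent direction for f_N
        if np.linalg.norm(d) < 1e-8:
            if N == len(xi_pool):
                break                       # full sample reached; stop
            N = min(int(grow_factor * N), len(xi_pool))
            continue
        t = 1.0                             # Armijo backtracking on f_N
        while (sample_objective(x + t * d, batch).mean()
               > f + c1 * t * np.dot(g, d)) and t > 1e-12:
            t *= beta
        x = x + t * d
        # Geometric sample growth: a crude stand-in for the adaptive
        # Inexact Restoration rule described in the abstract.
        N = min(int(grow_factor * N), len(xi_pool))
    return x
```

A quick check under these assumptions: drawing `xi_pool` with Gaussian features and $\pm 1$ labels, e.g. `np.hstack([np.random.randn(1000, 5), np.sign(np.random.randn(1000, 1))])`, and calling `adaptive_saa(np.zeros(5), xi_pool)` runs the loop end to end; for genuinely nonsmooth problems the negative subgradient need not be a descent direction, which is why the abstract instead assumes descent directions are available.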