We develop a line-search second-order algorithmic framework for minimizing finite sums. We make no convexity assumptions, but require the terms of the sum to be continuously differentiable with Lipschitz-continuous gradients. The methods fitting into this framework combine line searches with suitably decaying step lengths. A key ingredient is a two-step sampling procedure at each iteration, which allows us to control the error present in the line-search procedure. Stationarity of limit points is proved in the almost-sure sense, while almost-sure convergence of the sequence of approximations to the solution holds under the additional hypothesis that the functions are strongly convex. Numerical experiments, including comparisons with state-of-the-art stochastic optimization methods, show the efficiency of our approach.
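The main loop described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's actual method: it uses a plain subsampled-gradient direction on a least-squares finite sum (a full second-order variant would also subsample the Hessian), and it shows the two-step sampling idea, where one sample builds the search direction and an independent sample evaluates the Armijo line-search condition, with a decaying step length as fallback.

```python
import numpy as np

# Strongly convex test problem: f(x) = (1/N) * sum_i 0.5*(a_i^T x - b_i)^2,
# constructed so every term is minimized at x_true (interpolation regime).
rng = np.random.default_rng(0)
N, d = 200, 5
A = rng.normal(size=(N, d))
x_true = rng.normal(size=d)
b = A @ x_true

def f_batch(x, idx):
    """Sample average of the terms f_i over the index set idx."""
    r = A[idx] @ x - b[idx]
    return 0.5 * np.mean(r ** 2)

def grad_batch(x, idx):
    """Sample-average gradient over the index set idx."""
    r = A[idx] @ x - b[idx]
    return A[idx].T @ r / len(idx)

x = np.zeros(d)
for k in range(1, 201):
    # First sample: build the search direction from a subsampled gradient.
    S1 = rng.choice(N, size=32, replace=False)
    g = grad_batch(x, S1)
    p = -g
    # Second, independent sample: test the Armijo condition on it, so the
    # line-search error is controlled separately from the direction error.
    S2 = rng.choice(N, size=32, replace=False)
    t, c = 1.0, 1e-4
    accepted = False
    while t > 1e-8:
        if f_batch(x + t * p, S2) <= f_batch(x, S2) + c * t * (g @ p):
            accepted = True
            break
        t *= 0.5  # backtracking
    if not accepted:
        t = 1.0 / k  # fall back to a suitably decaying step length
    x = x + t * p

print(float(np.linalg.norm(x - x_true)))  # distance to the minimizer
```

Because the problem above is in the interpolation regime (every term vanishes at the common minimizer), the sampled gradients carry no noise at the solution, and the iterates approach `x_true` closely; on general finite sums, the decaying step lengths are what tame the residual sampling noise.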