This paper deals with regularized Newton methods, a flexible class of unconstrained optimization algorithms that is competitive with line search and trust-region methods and potentially combines attractive elements of both. The particular focus is on combining regularization with limited memory quasi-Newton methods by exploiting the special structure of limited memory algorithms. Global convergence of regularization methods is shown under mild assumptions, and the details of regularized limited memory quasi-Newton updates are discussed, including their compact representations. Numerical results on all large-scale test problems from the CUTEst collection indicate that our regularized version of L-BFGS is competitive with state-of-the-art line search and trust-region L-BFGS algorithms, as well as with previous attempts at combining L-BFGS with regularization, and potentially outperforms some of them, especially when nonmonotonicity is involved.
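As context for readers, a regularized Newton (or quasi-Newton) iteration is commonly written with a shifted model Hessian; the following is a generic sketch, not the paper's specific formulation, with $B_k$ a (limited memory quasi-Newton) Hessian approximation and $\sigma_k \ge 0$ the regularization parameter:

```latex
% Regularized (quasi-)Newton step: solve the shifted linear system
% (B_k + \sigma_k I) s_k = -\nabla f(x_k),  x_{k+1} = x_k + s_k.
\begin{equation*}
  (B_k + \sigma_k I)\, s_k = -\nabla f(x_k), \qquad x_{k+1} = x_k + s_k,
\end{equation*}
% \sigma_k is adapted between iterations (increased on unsuccessful
% steps, decreased on successful ones), interpolating between a pure
% quasi-Newton step (\sigma_k = 0) and a short gradient-like step
% (\sigma_k large), much as a trust-region radius would.
```

For large $\sigma_k$ the step direction approaches the steepest-descent direction, while $\sigma_k = 0$ recovers the unregularized quasi-Newton step; compact representations of $B_k$ allow the shifted system to be solved efficiently in the limited memory setting.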