We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This quasi-Newton method has limited computational memory requirements and is widely used for deterministic unconstrained optimization, but it is currently avoided in large-scale problems because it requires step sizes to be provided at each iteration. Existing step-size selection methods for L-BFGS rely on heuristic tuning of design parameters and on extensive re-evaluations of the objective function and gradient to find appropriate step lengths. We propose a neural network architecture that takes local information about the current iterate as input. The step-size policy is learned from data of similar optimization problems, avoids additional evaluations of the objective function, and guarantees that the output step remains inside a predefined interval. The corresponding training procedure is formulated as a stochastic optimization problem and uses the backpropagation-through-time algorithm. The performance of the proposed method is evaluated on the training of classifiers for the MNIST database of handwritten digits and for CIFAR-10. The results show that the proposed algorithm outperforms heuristically tuned optimizers such as ADAM, RMSprop, L-BFGS with a backtracking line search, and L-BFGS with a constant step size. The numerical results also show that a learned policy can be used as a warm start to train new policies for different problems after a few additional training steps, highlighting its potential use across multiple large-scale optimization problems.
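To make the two main ingredients concrete, the sketch below illustrates, under stated assumptions, (i) a small policy network whose sigmoid output is rescaled so that the step size is guaranteed to lie inside a predefined interval [alpha_min, alpha_max], and (ii) a backpropagation-through-time training step that unrolls T inner optimizer iterations and differentiates the final objective value through the whole trajectory. The scalar feature choices, the gradient-descent stand-in for the L-BFGS search direction, and all names are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StepSizePolicy(nn.Module):
    """Maps local information about the current iterate to a step size
    guaranteed to lie strictly inside (alpha_min, alpha_max)."""
    def __init__(self, n_features=3, hidden=16, alpha_min=1e-4, alpha_max=1.0):
        super().__init__()
        self.alpha_min, self.alpha_max = alpha_min, alpha_max
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, grad, direction, prev_alpha):
        # Scalar local features of the current iterate (an assumption):
        # directional derivative g^T d, gradient norm, previous step size.
        feats = torch.stack([torch.dot(grad, direction),
                             grad.norm(),
                             prev_alpha.reshape(())])
        s = torch.sigmoid(self.net(feats)).squeeze()
        # Affine rescaling of the sigmoid keeps the output in the interval.
        return self.alpha_min + (self.alpha_max - self.alpha_min) * s

def bptt_training_step(policy, policy_opt, loss_fn, w0, T=10):
    """One stochastic-optimization step for the policy: unroll T inner
    iterations, then backpropagate the final loss through time."""
    w = w0.detach().clone().requires_grad_(True)
    alpha = torch.tensor(0.1)
    for _ in range(T):
        g = torch.autograd.grad(loss_fn(w), w, create_graph=True)[0]
        d = -g                       # stand-in for the L-BFGS direction
        alpha = policy(g, d, alpha)  # policy chooses the step size
        w = w + alpha * d            # differentiable iterate update
    final_loss = loss_fn(w)
    policy_opt.zero_grad()
    final_loss.backward()            # gradients flow through all T steps
    policy_opt.step()
    return final_loss.item()

# Example on a toy quadratic objective:
policy = StepSizePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss = bptt_training_step(policy, opt, lambda w: (w ** 2).sum(),
                          w0=torch.randn(5))
```

The sigmoid rescaling is one simple way to enforce the interval constraint without a projection step, and because the step size is produced by a forward pass through the network, no extra evaluations of the objective function are needed, in contrast to a backtracking line search.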