Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.
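To make the flavor of the toy analysis concrete, here is a minimal sketch (not the paper's exact setup) of short-horizon bias on a noisy quadratic. The specific choices below are assumptions for illustration: a diagonal quadratic with log-spaced curvatures, gradient-noise variance equal to the curvature in each dimension, unit initialization, a "greedy" schedule that picks the learning rate minimizing the expected loss one step ahead (an extreme short-horizon meta-objective), and a constant learning rate grid-searched for the full horizon as the long-horizon reference. Expected dynamics are tracked in closed form, so no sampling is needed.

```python
"""Illustrative sketch of short-horizon bias on a noisy quadratic.
Assumptions (not the paper's exact protocol): diagonal quadratic with
log-spaced curvatures h_i, gradient-noise variance sigma_i^2 = h_i,
unit initialization.  Compares a greedy (one-step-lookahead) learning
rate schedule against a constant learning rate tuned for the full
horizon, using the exact expected-loss recursion."""
import numpy as np

h = np.logspace(-3, 0, 20)          # curvatures of the diagonal quadratic
sigma2 = h.copy()                   # assumed gradient-noise variance per dim
m0 = np.ones_like(h)                # E[theta_i^2] at initialization
T = 10_000                          # long training horizon


def expected_loss(m):
    # Expected loss of the quadratic given second moments m_i = E[theta_i^2].
    return 0.5 * np.sum(h * m)


def step(m, lr):
    # Exact update of the expected dynamics of SGD on the noisy quadratic.
    return (1.0 - lr * h) ** 2 * m + lr ** 2 * sigma2


def run_greedy(T):
    """Learning rate chosen to minimize the *next-step* expected loss
    (horizon 1); the minimizer has a closed form."""
    m, lrs = m0.copy(), []
    for _ in range(T):
        lr = np.sum(h ** 2 * m) / np.sum(h ** 3 * m + h * sigma2)
        m = step(m, lr)
        lrs.append(lr)
    return expected_loss(m), lrs


def run_constant(T, lr):
    m = m0.copy()
    for _ in range(T):
        m = step(m, lr)
    return expected_loss(m)


greedy_loss, greedy_lrs = run_greedy(T)
grid = np.logspace(-4, 0, 200)
const_losses = [run_constant(T, lr) for lr in grid]
best = int(np.argmin(const_losses))

print(f"greedy (1-step) schedule: final lr {greedy_lrs[-1]:.2e}, "
      f"loss after {T} steps {greedy_loss:.4f}")
print(f"best constant lr for horizon {T}: {grid[best]:.2e}, "
      f"loss {const_losses[best]:.4f}")
```

Comparing the two printed learning rates and losses is one way to see the bias qualitatively: the greedy schedule keeps shrinking the step size to lower the noise floor of high-curvature directions, at the cost of progress along low-curvature directions that only pays off over a long horizon.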