Tail averaging improves on Polyak averaging's non-asymptotic behaviour by excluding a number of leading iterates of stochastic optimization from its calculations. In practice, with a finite number of optimization steps and a learning rate that cannot be annealed to zero, tail averaging can get much closer to a local minimum point of the training loss than either the individual iterates or the Polyak average. However, the number of leading iterates to ignore is an important hyperparameter, and starting averaging too early or too late leads to inefficient use of resources or suboptimal solutions. Setting this hyperparameter to improve generalization is even more difficult, especially in the presence of other hyperparameters and overfitting. Furthermore, before averaging starts, the loss is only weakly informative of the final performance, which makes early stopping unreliable. To alleviate these problems, we propose an anytime variant of tail averaging that has no hyperparameters and approximates the optimal tail at all optimization steps. Our algorithm is based on two running averages with adaptive lengths bounded in terms of the optimal tail length, one of which achieves approximate optimality with some regularity. Requiring only the additional storage for two sets of weights and periodic evaluation of the loss, the proposed two-tailed averaging algorithm is a practical and widely applicable method for improving stochastic optimization.
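The abstract describes the method only at a high level. The following is a minimal Python sketch of the two-running-averages idea: a long and a short average of the weights are both updated every step, and the loss is evaluated periodically. The class name TwoTailedAverager, the eval_every parameter, and the adopt-and-restart rule applied at evaluation time are illustrative assumptions, not the paper's exact specification.

```python
import copy


class TwoTailedAverager:
    """Sketch of anytime tail averaging with two running averages.

    Both averages are updated at every optimization step. Periodically the
    loss of each averaged set of weights is evaluated; if the short average
    is at least as good, it replaces the long average and is itself
    restarted (assumed switching rule for this sketch).
    """

    def __init__(self, eval_every=100):
        self.eval_every = eval_every
        self.long_avg, self.long_n = None, 0
        self.short_avg, self.short_n = None, 0
        self.step = 0

    @staticmethod
    def _update(avg, n, weights):
        # Incremental mean: avg <- avg + (w - avg) / (n + 1).
        if avg is None:
            return copy.deepcopy(weights), 1
        n += 1
        for name, w in weights.items():
            avg[name] += (w - avg[name]) / n
        return avg, n

    def update(self, weights, eval_loss):
        """weights: dict of parameters; eval_loss: callable scoring such a dict."""
        self.step += 1
        self.long_avg, self.long_n = self._update(self.long_avg, self.long_n, weights)
        self.short_avg, self.short_n = self._update(self.short_avg, self.short_n, weights)
        if self.step % self.eval_every == 0:
            # Periodic loss evaluation: adopt the shorter tail if it is
            # already at least as good, then start a fresh short tail.
            if eval_loss(self.short_avg) <= eval_loss(self.long_avg):
                self.long_avg, self.long_n = self.short_avg, self.short_n
                self.short_avg, self.short_n = None, 0
        # The long average is the current estimate of the optimal tail average.
        return self.long_avg
```

Consistent with the abstract's claim, the only overhead in this sketch is storage for the two averaged copies of the weights and a periodic loss evaluation; the optimizer's own iterates are left untouched.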