Optimization is often cast as a deterministic problem, where the solution is found through some iterative procedure such as gradient descent. However, when training neural networks, the loss function changes over (iteration) time due to the randomized selection of a subset of the samples. This randomization turns the optimization problem into a stochastic one. We propose to consider the loss as a noisy observation with respect to some reference optimum. This interpretation of the loss allows us to adopt Kalman filtering as an optimizer, as its recursive formulation is designed to estimate unknown parameters from noisy measurements. Moreover, we show that the Kalman Filter dynamical model for the evolution of the unknown parameters can be used to capture the gradient dynamics of advanced methods such as Momentum and Adam. We call this stochastic optimization method KaFiStO. KaFiStO is an easy-to-implement, scalable, and efficient method to train neural networks. We show that it also yields parameter estimates that are on par with or better than existing optimization algorithms across several neural network architectures and machine learning tasks, such as computer vision and language modeling.
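To make the filtering interpretation concrete, the following is a minimal, hypothetical sketch of an extended-Kalman-filter style update that treats the mini-batch loss as a noisy measurement of a reference optimum (taken here to be zero), with the network parameters as the hidden state. The random-walk dynamics, the noise variances `Q` and `R`, and all function names are illustrative assumptions, not the paper's exact KaFiStO algorithm.

```python
# Hedged sketch: a Kalman-filter-inspired parameter update in which the scalar
# loss is treated as a noisy observation of a reference optimum (assumed 0).
import numpy as np

def ekf_style_step(theta, P, loss_fn, grad_fn, Q=1e-4, R=1e-2, target=0.0):
    """One Kalman-filter-inspired update (illustrative, not KaFiStO itself).

    theta : (d,) current parameter estimate (state mean)
    P     : (d, d) state covariance
    loss_fn, grad_fn : loss and its gradient on the current mini-batch
    Q, R  : assumed process and measurement noise variances
    target: assumed reference optimum for the loss
    """
    d = len(theta)
    # Predict step: random-walk dynamics theta_{k+1} = theta_k + w, w ~ N(0, Q I)
    P = P + Q * np.eye(d)

    # Measurement model: z = loss(theta) + v, linearized with Jacobian H = grad(theta)^T
    H = grad_fn(theta)[None, :]            # (1, d)
    S = H @ P @ H.T + R                    # innovation variance, (1, 1)
    K = P @ H.T / S                        # Kalman gain, (d, 1)

    innovation = target - loss_fn(theta)   # distance of the loss from the optimum
    theta = theta + (K * innovation).ravel()
    P = P - K @ H @ P                      # covariance update
    return theta, P

# Toy usage on a quadratic loss: the estimate approaches the minimizer at 0.
loss = lambda t: 0.5 * np.sum(t ** 2)
grad = lambda t: t
theta, P = np.ones(3), np.eye(3)
for _ in range(50):
    theta, P = ekf_style_step(theta, P, loss, grad)
print(theta)
```

In this sketch the Kalman gain plays the role of an adaptive, per-parameter step size, which is one way to read the abstract's claim that the filter's dynamical model can mimic the behavior of methods such as Momentum and Adam.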