Adam and AdaBelief compute and make use of elementwise adaptive stepsizes when training deep neural networks (DNNs) by tracking the exponential moving average (EMA) of the squared gradient g_t^2 and the squared prediction error (m_t-g_t)^2, respectively, where m_t is the first momentum at iteration t and can be viewed as a prediction of g_t. In this work, we investigate whether layerwise gradient statistics can be exploited in Adam and AdaBelief to allow for more effective training of DNNs. We address this research question in two steps. First, we slightly modify Adam and AdaBelief by introducing layerwise adaptive stepsizes into their update procedures via either pre- or post-processing. Our empirical results indicate that this slight modification produces comparable performance when training VGG and ResNet models on CIFAR10 and CIFAR100, suggesting that layerwise gradient statistics play an important role in the success of Adam and AdaBelief for at least certain DNN tasks. In the second step, we propose Aida, a new optimisation method, designed so that the elementwise stepsizes within each layer have significantly smaller statistical variances, and the layerwise average stepsizes are much more compact across all the layers. Motivated by the fact that (m_t-g_t)^2 in AdaBelief is conservative compared to g_t^2 in Adam in terms of layerwise statistical averages and variances, Aida tracks a more conservative function of m_t and g_t than (m_t-g_t)^2, obtained via layerwise vector projections. Experimental results show that Aida yields either competitive or better performance than a number of existing methods, including Adam and AdaBelief, on a set of challenging DNN tasks. Code is available at https://github.com/guoqiang-x-zhang/AidaOptimizer.
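The following minimal sketch contrasts the per-layer second-moment trackers described above. The Adam and AdaBelief updates follow their standard definitions; the Aida-style update is only an illustration of the abstract's description, assuming the "layerwise vector projections" take the form of K mutual projections of m_t and g_t before squaring their difference. The function name `second_moment_sketch` and the parameters `K` and `eps` are illustrative, not the paper's exact interface.

```python
import numpy as np

def second_moment_sketch(m_t, g_t, v_adam, v_belief, v_aida,
                         beta2=0.999, K=2, eps=1e-12):
    """Sketch of layerwise second-moment trackers (m_t, g_t are one layer's vectors).

    Adam tracks an EMA of g_t^2; AdaBelief tracks an EMA of (m_t - g_t)^2.
    The Aida-style update below is an assumption about the projection-based
    function mentioned in the abstract, not a verbatim reproduction of the paper.
    """
    # Adam: EMA of the squared gradient (elementwise).
    v_adam = beta2 * v_adam + (1 - beta2) * g_t ** 2

    # AdaBelief: EMA of the squared prediction error (elementwise).
    v_belief = beta2 * v_belief + (1 - beta2) * (m_t - g_t) ** 2

    # Hypothetical Aida-style step: mutually project m_t and g_t onto each
    # other K times within the layer, which shrinks their difference and
    # yields a more conservative quantity than (m_t - g_t)^2.
    m_p, g_p = m_t.copy(), g_t.copy()
    for _ in range(K):
        inner = np.dot(m_p, g_p)
        m_new = (inner / (np.dot(g_p, g_p) + eps)) * g_p  # project m onto g
        g_new = (inner / (np.dot(m_p, m_p) + eps)) * m_p  # project g onto m
        m_p, g_p = m_new, g_new
    v_aida = beta2 * v_aida + (1 - beta2) * (m_p - g_p) ** 2

    return v_adam, v_belief, v_aida

# Example usage on a single layer's flattened momentum and gradient.
rng = np.random.default_rng(0)
m, g = rng.normal(size=100), rng.normal(size=100)
zeros = np.zeros(100)
v_a, v_b, v_i = second_moment_sketch(m, g, zeros, zeros, zeros)
print(v_a.mean(), v_b.mean(), v_i.mean())  # Aida-style tracker is the smallest
```

Because each round of mutual projection scales both vectors by roughly the squared cosine of the angle between them, the tracked quantity (m_p - g_p)^2 is never larger than (m_t - g_t)^2, which is consistent with the "more conservative" behaviour the abstract attributes to Aida.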