Stochastic gradient descent (SGD) optimizers are generally used to train convolutional neural networks (CNNs). In recent years, several adaptive momentum-based SGD optimizers have been introduced, such as Adam, diffGrad, Radam and AdaBelief. However, the existing SGD optimizers do not exploit the gradient norm of past iterations, which leads to poor convergence and performance. In this paper, we propose novel AdaNorm-based SGD optimizers that correct the norm of the gradient in each iteration based on the adaptive training history of gradient norms. By doing so, the proposed optimizers are able to maintain a high and representative gradient throughout training and to solve the low and atypical gradient problems. The proposed concept is generic and can be used with any existing SGD optimizer. We show the efficacy of the proposed AdaNorm with four state-of-the-art optimizers, including Adam, diffGrad, Radam and AdaBelief. We demonstrate the performance improvement achieved by the proposed optimizers using three CNN models, namely VGG16, ResNet18 and ResNet50, on three benchmark object recognition datasets, namely CIFAR10, CIFAR100 and TinyImageNet. Code: \url{https://github.com/shivram1987/AdaNorm}.
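To make the gradient-norm correction concrete, the following is a minimal PyTorch-style sketch of the idea as described above: an exponential moving average of past gradient norms is maintained, and a gradient whose norm falls below this history is rescaled before entering an Adam-like update. The function name, the smoothing factor \texttt{gamma}, and the exact placement of the corrected gradient within the moment estimates are illustrative assumptions; refer to the linked repository for the authors' implementation.

\begin{verbatim}
import torch

def adanorm_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                 gamma=0.95, eps=1e-8):
    """Sketch of an Adam-like step with AdaNorm-style gradient-norm
    correction. Hyperparameter names and the exact update rule are
    assumptions based on the abstract, not the authors' exact code."""
    g_norm = grad.norm()
    # exponential moving average of past gradient norms (training history)
    state['ema_norm'] = gamma * state.get('ema_norm', 0.0) + (1 - gamma) * g_norm
    if state['ema_norm'] > g_norm:
        # boost a low/atypical gradient up to the historical norm level
        grad = grad * (state['ema_norm'] / (g_norm + 1e-12))
    # standard Adam moment estimates on the corrected gradient
    state['m'] = betas[0] * state.get('m', torch.zeros_like(grad)) \
        + (1 - betas[0]) * grad
    state['v'] = betas[1] * state.get('v', torch.zeros_like(grad)) \
        + (1 - betas[1]) * grad * grad
    state['t'] = state.get('t', 0) + 1
    m_hat = state['m'] / (1 - betas[0] ** state['t'])
    v_hat = state['v'] / (1 - betas[1] ** state['t'])
    param.data.add_(-lr * m_hat / (v_hat.sqrt() + eps))
\end{verbatim}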