Inspired by BatchNorm, there has been an explosion of normalization layers for deep neural networks (DNNs). However, these alternative normalization layers have seen only minimal adoption, partly due to a lack of guiding principles for identifying when they can serve as a replacement for BatchNorm. To address this problem, we take a theoretical approach, generalizing the known beneficial mechanisms of BatchNorm to several recently proposed normalization techniques. Our generalized theory leads to the following set of principles: (i) similar to BatchNorm, activations-based normalization layers can prevent exponential growth of activations in ResNets, but parametric layers require explicit remedies; (ii) use of GroupNorm can ensure informative forward propagation, with different samples being assigned dissimilar activations, but increasing the group size makes activations for different samples increasingly indistinguishable, explaining the slow convergence of models with LayerNorm; (iii) small group sizes result in large gradient norms in earlier layers, explaining the training instability of Instance Normalization and illustrating a speed-stability tradeoff in GroupNorm. Overall, our analysis reveals a unified set of mechanisms that underpin the success of normalization methods in deep learning, providing us with a compass to systematically explore the vast design space of DNN normalization layers.
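The group-size axis referenced in points (ii) and (iii) can be made concrete with PyTorch's `torch.nn.GroupNorm`: setting the number of groups to 1 normalizes over all channels jointly (LayerNorm-style statistics per sample), while setting it equal to the channel count normalizes each channel separately (Instance Normalization); intermediate values interpolate between the two extremes. The following is a minimal sketch (not taken from the paper; the tensor shapes and variable names are illustrative assumptions) showing how the three regimes are instantiated from the same layer.

```python
# Minimal sketch of the GroupNorm group-size axis; shapes and names are illustrative.
import torch
import torch.nn as nn

C = 32                              # number of channels (assumed for illustration)
x = torch.randn(8, C, 16, 16)       # a batch of feature maps: (batch, C, H, W)

layer_norm_like = nn.GroupNorm(num_groups=1, num_channels=C)  # 1 group: LayerNorm-style
group_norm      = nn.GroupNorm(num_groups=8, num_channels=C)  # intermediate group size
instance_norm   = nn.GroupNorm(num_groups=C, num_channels=C)  # C groups: InstanceNorm

for name, layer in [("LayerNorm-like (1 group)", layer_norm_like),
                    ("GroupNorm (8 groups)", group_norm),
                    ("InstanceNorm (C groups)", instance_norm)]:
    y = layer(x)
    # Smaller groups compute statistics over fewer channels per sample;
    # larger groups share statistics across more channels of the same sample.
    print(f"{name}: output mean {y.mean():+.4f}, std {y.std():.4f}")
```

Sweeping `num_groups` between these extremes is one way to probe the speed-stability tradeoff described in principle (iii) empirically.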