In this paper, we study normalization methods for neural networks from the perspective of elimination singularities. Elimination singularities correspond to points on the training trajectory where neurons become consistently deactivated. They cause degenerate manifolds in the loss landscape, which slow down training and harm model performance. We show that channel-based normalizations (e.g., Layer Normalization and Group Normalization) are unable to guarantee a safe distance from elimination singularities, in contrast with Batch Normalization, which by design keeps models from getting too close to them. To address this issue, we propose Batch-Channel Normalization (BCN), which uses batch knowledge to avoid elimination singularities in the training of channel-normalized models. Unlike Batch Normalization, BCN can run in both large-batch and micro-batch training settings. The effectiveness of BCN is verified on many tasks, including image classification, object detection, instance segmentation, and semantic segmentation. The code is available at https://github.com/joe-siyuan-qiao/Batch-Channel-Normalization.
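For a concrete picture of the idea sketched above, the snippet below gives a minimal PyTorch sketch of a batch-channel normalization layer: batch statistics (as in BN) applied before channel-group statistics (as in GN). The class name `BatchChannelNorm2d` and the exact composition of the two stages are assumptions inferred from this abstract, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class BatchChannelNorm2d(nn.Module):
    """Hypothetical sketch: batch-statistics normalization followed by
    channel-group normalization. Composition and defaults are assumptions
    based on the abstract, not the paper's exact formulation."""

    def __init__(self, num_channels, num_groups=32, eps=1e-5, momentum=0.1):
        super().__init__()
        # Batch-statistics stage supplies the "batch knowledge".
        self.bn = nn.BatchNorm2d(num_channels, eps=eps, momentum=momentum)
        # Channel-statistics stage keeps the layer usable with micro-batches.
        self.gn = nn.GroupNorm(num_groups, num_channels, eps=eps)

    def forward(self, x):
        return self.gn(self.bn(x))

# Usage: drop-in replacement for BatchNorm2d / GroupNorm in a conv block.
if __name__ == "__main__":
    layer = BatchChannelNorm2d(num_channels=64, num_groups=32)
    x = torch.randn(2, 64, 56, 56)   # micro-batch of 2
    print(layer(x).shape)            # torch.Size([2, 64, 56, 56])
```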