Normalization techniques have become a basic component in modern convolutional neural networks (ConvNets). In particular, many recent works demonstrate that promoting the orthogonality of the weights helps train deep models and improves robustness. For ConvNets, most existing methods are based on penalizing or normalizing weight matrices derived from concatenating or flattening the convolutional kernels. These methods often destroy or ignore the benign convolutional structure of the kernels; as a result, they are often expensive or impractical for deep ConvNets. In contrast, we introduce a simple and efficient "Convolutional Normalization" (ConvNorm) method that fully exploits the convolutional structure in the Fourier domain and serves as a plug-and-play module that can be conveniently incorporated into any ConvNet. Our method is inspired by recent work on preconditioning methods for convolutional sparse coding and effectively promotes each layer's channel-wise isometry. Furthermore, we show that ConvNorm reduces the layerwise spectral norm of the weight matrices and hence improves the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets. Applied to classification under noise corruption and to generative adversarial networks (GANs), ConvNorm improves the robustness of common ConvNets such as ResNet and the performance of GANs. We verify our findings via numerical experiments on CIFAR and ImageNet.
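To make the Fourier-domain, plug-and-play idea concrete, the sketch below wraps a standard PyTorch `nn.Conv2d` and filters each output channel by a per-frequency preconditioner computed from the kernel spectrum, so that the composed operator is approximately a channel-wise isometry. This is a minimal sketch under simplifying assumptions: the class name `ConvNorm2d`, the `eps` regularizer, stride-1 convolution, and circular (FFT) boundary handling are choices made here for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class ConvNorm2d(nn.Module):
    """Minimal sketch of channel-wise convolutional normalization.

    After the wrapped convolution, each output channel k is filtered by a
    preconditioner v_k whose spectrum is (sum_c |a_hat_{c,k}(w)|^2)^(-1/2),
    promoting channel-wise isometry. Circular boundary handling, stride 1,
    and the eps regularizer are simplifications of this sketch.
    """

    def __init__(self, conv: nn.Conv2d, eps: float = 1e-6):
        super().__init__()
        self.conv = conv
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)                                     # (B, C_out, H, W)
        H, W = y.shape[-2:]
        # Kernel spectrum evaluated on the feature-map grid.
        w_hat = torch.fft.rfft2(self.conv.weight, s=(H, W))  # (C_out, C_in, H, Wr)
        # Per-frequency energy of each output channel's stacked filters.
        energy = (w_hat.abs() ** 2).sum(dim=1)               # (C_out, H, Wr)
        v_hat = torch.rsqrt(energy + self.eps)               # preconditioner spectrum
        # Diagonal (per-frequency) filtering of each output channel.
        y_hat = torch.fft.rfft2(y, s=(H, W)) * v_hat
        return torch.fft.irfft2(y_hat, s=(H, W))
```

In use, one would wrap an existing layer, e.g. `ConvNorm2d(nn.Conv2d(64, 64, kernel_size=3, padding=1))`, leaving the surrounding architecture unchanged. Because the preconditioner is diagonal in the Fourier basis, the extra cost is a pair of FFTs per layer rather than an orthogonalization of a large flattened weight matrix, which is what makes the approach cheap enough for deep ConvNets.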