Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information overload resulting from the large number of network parameters. In this paper, we propose frequency regularization to restrict the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level and can be applied to almost all network architectures. Specifically, the tensors of parameters are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in zigzag order. The inverse discrete cosine transform (IDCT) is then used to reconstruct the spatial tensors for the matrix operations performed during network training. Since the high-frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. With an accuracy decrease of less than 2\%, a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (a compression of over 1100$\times$), and a UNet with 34M parameters by only 759 float16 numbers (over 80000$\times$).
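The mechanism described above (keep trainable parameters in the frequency domain, zero the high-frequency coefficients in zigzag order, then apply the IDCT to recover spatial weights for the forward pass) can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the tensor size, the number of retained coefficients, and the helper names (`low_freq_mask`, `freq_weights`) are hypothetical.

```python
import numpy as np
from scipy.fft import idctn

def low_freq_mask(shape, keep):
    """Mask keeping the `keep` lowest-frequency coefficients of a
    2-D tensor in zigzag (anti-diagonal) order, zeroing the rest."""
    mask = np.zeros(shape)
    # Enumerate coefficients by anti-diagonal (i + j), i.e. a zigzag-style
    # low-to-high frequency ordering as used in JPEG-like schemes.
    order = sorted(
        ((i, j) for i in range(shape[0]) for j in range(shape[1])),
        key=lambda p: (p[0] + p[1], p[0]),
    )
    for i, j in order[:keep]:
        mask[i, j] = 1.0
    return mask

# Hypothetical frequency-domain parameter tensor (trainable in practice).
rng = np.random.default_rng(0)
freq_weights = rng.standard_normal((8, 8))

# Eliminate high-frequency components: only 10 coefficients survive.
masked = freq_weights * low_freq_mask((8, 8), keep=10)

# Reconstruct the spatial tensor used for matrix operations in training.
spatial_weights = idctn(masked, norm="ortho")
```

Only the surviving low-frequency coefficients need to be stored, which is the source of the compression ratios reported in the abstract; the full spatial tensor is regenerated on the fly via the IDCT.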