Network quantization is an effective compression method to reduce the model size and computational cost. Despite the high compression ratio, training a low-precision model is difficult due to the discrete and non-differentiable nature of quantization, resulting in considerable performance degradation. Recently, Sharpness-Aware Minimization (SAM) has been proposed to improve the generalization performance of models by simultaneously minimizing the loss value and the loss sharpness. In this paper, we devise a Sharpness-Aware Quantization (SAQ) method to train quantized models, leading to better generalization performance. Moreover, since each layer contributes differently to the loss value and the loss sharpness of a network, we further devise an effective method that learns a configuration generator to automatically determine the bitwidth configuration of each layer, assigning lower bitwidths to layers with flat loss landscapes and higher bitwidths to those with sharp ones, while simultaneously promoting the flatness of minima to enable more aggressive quantization. Extensive experiments on CIFAR-100 and ImageNet show the superior performance of the proposed methods. For example, our quantized ResNet-18 with a 55.1x Bit-Operation (BOP) reduction even outperforms the full-precision one by 0.7% in terms of Top-1 accuracy. Code is available at https://github.com/zhuang-group/SAQ.
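To make the idea concrete, below is a minimal sketch (not the authors' implementation) of one sharpness-aware training step applied to a quantized model. It combines a SAM-style two-step update (ascend to a worst-case weight perturbation, then descend using the gradient taken there) with a simple uniform quantizer using a straight-through estimator. The function names `quantize_ste` and `saq_step`, the perturbation radius `rho`, and the assumption that the model's layers apply `quantize_ste` in their forward pass are all illustrative, not part of the released code.

```python
import torch
import torch.nn.functional as F


def quantize_ste(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    # Forward uses the quantized weights, backward treats the rounding as identity.
    return w + (w_q - w).detach()


def saq_step(model, x, y, optimizer, rho=0.05):
    # 1) Loss and gradients at the current weights (quantized in the forward pass).
    F.cross_entropy(model(x), y).backward()

    # 2) Ascend to the worst-case point within an L2 ball of radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            e = None
            if p.grad is not None:
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # 3) Gradient at the perturbed weights gives the sharpness-aware update direction.
    F.cross_entropy(model(x), y).backward()

    # 4) Restore the original weights, then apply the update with that gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```

In this sketch the per-layer bitwidths are fixed; in the paper a learned configuration generator would instead choose the bitwidth of each layer based on its loss sharpness.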