While post-training quantization owes much of its popularity to the fact that it does not require access to the original, complete training dataset, its poor performance also stems from this limitation. To alleviate this limitation, in this paper we leverage the synthetic data introduced by zero-shot quantization together with the calibration dataset, and we propose a fine-grained data distribution alignment (FDDA) method to boost the performance of post-training quantization. The method is based on two important properties of batch normalization statistics (BNS) that we observe in the deep layers of the trained network, namely inter-class separation and intra-class incohesion. To preserve this fine-grained distribution information: 1) We compute the per-class BNS of the calibration dataset as the BNS center of each class and propose a BNS-centralized loss that forces the synthetic data distributions of different classes to stay close to their own centers. 2) We add Gaussian noise to the centers to imitate the incohesion and propose a BNS-distorted loss that forces the synthetic data distribution of the same class to stay close to the distorted centers. With these two fine-grained losses, our method achieves state-of-the-art performance on ImageNet, especially when the first and last layers are also quantized to low bit-width. Our project is available at https://github.com/viperit/FDDA.
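As a rough illustration of the two losses sketched above, the snippet below shows one possible form of a per-class BNS-centralized term and a BNS-distorted term. This is a minimal sketch, not the authors' implementation: the function names (`bns_centralized_loss`, `bns_distorted_loss`), the choice of a squared L2 distance, and the noise scale `sigma` are assumptions for illustration; the exact formulation (per-layer weighting, which layers' statistics are matched, and how the noise is applied) follows the paper itself.

```python
import torch

def bns_centralized_loss(feat_mean, feat_var, center_mean, center_var):
    """Pull the batch statistics of synthetic images of one class toward that
    class's BNS center, i.e., the per-class mean/variance computed from the
    calibration dataset (hypothetical squared-L2 form)."""
    return (feat_mean - center_mean).pow(2).sum() + \
           (feat_var - center_var).pow(2).sum()

def bns_distorted_loss(feat_mean, feat_var, center_mean, center_var, sigma=0.1):
    """Pull the same statistics toward a Gaussian-distorted copy of the center,
    imitating the intra-class incohesion observed in deep layers.
    `sigma` is an assumed noise scale, not a value from the paper."""
    noisy_mean = center_mean + sigma * torch.randn_like(center_mean)
    noisy_var = center_var + sigma * torch.randn_like(center_var)
    return (feat_mean - noisy_mean).pow(2).sum() + \
           (feat_var - noisy_var).pow(2).sum()
```

In this sketch, `feat_mean`/`feat_var` would be the batch statistics of synthetic images belonging to one class at a given BN layer, and `center_mean`/`center_var` the corresponding per-class calibration statistics; both losses would be summed over layers during synthetic-data generation.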