Zero-shot quantization is a promising approach for developing lightweight deep neural networks when data are inaccessible for various reasons, including cost and privacy concerns. By utilizing the learned parameters (statistics) of FP32 pre-trained models, zero-shot quantization schemes focus on generating synthetic data by minimizing the distance between the learned parameters ($\mu$ and $\sigma$) and the distributions of intermediate activations. Subsequently, they distill knowledge from the pre-trained model (\textit{teacher}) to the quantized model (\textit{student}) so that the quantized model can be optimized on the synthetic dataset. In general, zero-shot quantization comprises two major elements: synthesizing datasets and quantizing models. However, thus far, zero-shot quantization has primarily been discussed in the context of quantization-aware training methods, which require task-specific losses and long-term optimization comparable to retraining. We therefore introduce a post-training quantization scheme for zero-shot quantization that produces high-quality quantized networks within a few hours or even half an hour. Furthermore, we propose a framework called \genie~that generates data suited for post-training quantization. With the data synthesized by \genie, we can produce high-quality quantized models without real datasets, achieving performance comparable to few-shot quantization. We also propose a post-training quantization algorithm to enhance the performance of quantized models. By combining the two, we bridge the gap between zero-shot and few-shot quantization while significantly improving quantization performance compared to existing approaches. In other words, we obtain a unique state-of-the-art zero-shot quantization approach.
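The statistics-matching objective mentioned above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact loss: it measures how far the mean and standard deviation of a synthetic batch's intermediate activations deviate from the $\mu$ and $\sigma$ stored in the pre-trained model's batch-normalization layers. The function name and the squared-distance form are hypothetical simplifications.

```python
import numpy as np

def bn_stat_loss(activations, bn_mean, bn_std):
    """Distance between the statistics of a synthetic batch's activations
    and the BN statistics (mu, sigma) stored in the pre-trained model.
    `activations` has shape (batch, channels); smaller is better."""
    batch_mean = activations.mean(axis=0)
    batch_std = activations.std(axis=0)
    return float(np.sum((batch_mean - bn_mean) ** 2)
                 + np.sum((batch_std - bn_std) ** 2))

rng = np.random.default_rng(0)
bn_mean = np.zeros(4)   # stored BN running mean (mu)
bn_std = np.ones(4)     # stored BN running std (sigma)

# A batch whose statistics roughly match the stored ones incurs a
# small loss; a shifted batch incurs a larger one, so minimizing this
# loss drives synthetic data toward the learned statistics.
matched = rng.normal(0.0, 1.0, size=(1024, 4))
shifted = rng.normal(3.0, 1.0, size=(1024, 4))
print(bn_stat_loss(matched, bn_mean, bn_std)
      < bn_stat_loss(shifted, bn_mean, bn_std))
```

In practice such a loss is summed over all BN layers and minimized with respect to the synthetic inputs by backpropagating through the frozen teacher network.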