Convolutional neural networks can learn realistic image priors from numerous training samples in low-level image generation and restoration. We show that, for high-level image recognition tasks, we can further reconstruct "realistic" images of each category by leveraging intrinsic Batch Normalization (BN) statistics, without any training data. Inspired by popular VAE/GAN methods, we regard the zero-shot optimization of synthetic images as generative modeling that matches the distribution of BN statistics. The generated images then serve as a calibration set for the subsequent zero-shot network quantization. Our method thus suits quantizing models trained on sensitive data, where, \textit{e.g.,} due to privacy concerns, no data is available. Extensive experiments on benchmark datasets show that, with the help of the generated data, our approach consistently outperforms existing data-free quantization methods.
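To make the core idea concrete, the following is a minimal, hedged sketch (not the paper's actual implementation) of BN-statistic matching: starting from random noise, samples are optimized by gradient descent so that their batch mean and variance match the running statistics stored in a trained model's BN layers. The target values \texttt{mu\_bn} and \texttt{var\_bn} are stand-ins for one BN layer's running statistics; a real model would apply this loss per channel, per layer, through the network.

```python
import random

random.seed(0)

# Assumed "stored" BN running statistics from a trained model (illustrative values).
mu_bn, var_bn = 0.5, 2.0

# Zero-shot setting: start from pure noise, since no training data is available.
n = 1000
x = [random.gauss(0.0, 1.0) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((xi - m) ** 2 for xi in v) / len(v)

def bn_matching_loss(v):
    # Squared distance between the batch statistics and the stored BN statistics.
    return (mean(v) - mu_bn) ** 2 + (var(v) - var_bn) ** 2

lr = 50.0
for _ in range(1000):
    m, s = mean(x), var(x)
    # Analytic gradient of the loss with respect to each sample x_i:
    #   d/dx_i (m - mu_bn)^2 = 2 (m - mu_bn) / n
    #   d/dx_i (s - var_bn)^2 = 4 (s - var_bn) (x_i - m) / n
    x = [xi - lr * (2 * (m - mu_bn) / n + 4 * (s - var_bn) * (xi - m) / n)
         for xi in x]
```

After optimization, the batch statistics of \texttt{x} closely match the stored targets; in the full method, the optimized inputs play the role of the calibration set used for quantization.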