Protecting the confidentiality of user data is an increasingly pressing challenge in deep learning research. Data-free quantization has therefore emerged as a promising way to compress models without access to user data. With no data available, however, model quantization naturally becomes less resilient and faces a higher risk of performance degradation. Prior works distill fake images by matching the activation distributions of a specific pre-trained model. However, such fake data cannot easily be applied to other models, and because it is optimized against an invariant objective, it lacks the generalizability and diversity found in natural image datasets. To address these problems, we propose the Learning in School~(LIS) algorithm, which generates images suitable for all models by inverting the knowledge of multiple teachers. We further introduce a decentralized training strategy that samples teachers from hierarchical courses, thereby maintaining the diversity of the generated images. LIS data is highly diverse, not model-specific, and requires only one-time synthesis to generalize across multiple models and applications. Extensive experiments show that LIS images resemble natural images with high quality and high fidelity. On data-free quantization, LIS significantly surpasses existing model-specific methods. In particular, LIS data is effective in both post-training quantization and quantization-aware training on the ImageNet dataset, achieving up to a 33\% top-1 accuracy uplift over existing methods.
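To make the prior-work baseline concrete, below is a minimal sketch of the activation-distribution-matching objective that model-specific data-free synthesis methods use: randomly initialized images are optimized so that the per-channel batch statistics entering each BatchNorm layer match the running statistics stored in a frozen pre-trained teacher. This is an illustrative reconstruction in PyTorch, not the LIS algorithm itself; the function name \texttt{bn\_matching\_loss}, the backbone choice, and all hyperparameters are assumptions for illustration.

\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

def bn_matching_loss(model: nn.Module, fake_images: torch.Tensor) -> torch.Tensor:
    """Distance between batch statistics of fake images and the BN running
    statistics of a frozen pre-trained model (illustrative sketch)."""
    stats = []

    def hook(module, inputs, _output):
        x = inputs[0]
        # Per-channel mean/variance of the current (fake) batch at this BN input.
        mean = x.mean(dim=[0, 2, 3])
        var = x.var(dim=[0, 2, 3], unbiased=False)
        stats.append((mean, var, module.running_mean, module.running_var))

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    model(fake_images)          # forward pass populates `stats` via the hooks
    for h in handles:
        h.remove()

    loss = fake_images.new_zeros(())
    for mean, var, run_mean, run_var in stats:
        loss = loss + torch.norm(mean - run_mean, 2) + torch.norm(var - run_var, 2)
    return loss

# Usage: optimize random noise images against a single frozen teacher.
teacher = models.resnet18(pretrained=True).eval()
for p in teacher.parameters():
    p.requires_grad_(False)
images = torch.randn(8, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([images], lr=0.1)
for _ in range(10):            # a few illustrative optimization steps
    opt.zero_grad()
    loss = bn_matching_loss(teacher, images)
    loss.backward()
    opt.step()
\end{verbatim}

Because this objective is tied to one teacher's statistics, the resulting images overfit that model; LIS instead inverts knowledge from multiple teachers sampled from hierarchical courses, so a single synthesized dataset transfers across models.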