Learning to synthesize data has emerged as a promising direction in zero-shot quantization (ZSQ), which represents neural networks with low-bit integers without accessing any real data. In this paper, we observe an interesting phenomenon of intra-class heterogeneity in real data and show that existing methods fail to retain this property in their synthetic images, which limits their performance gains. To address this issue, we propose a novel zero-shot quantization method referred to as IntraQ. First, we propose a local object reinforcement that locates target objects at different scales and positions within the synthetic images. Second, we introduce a marginal distance constraint to form class-related features distributed over a coarse area. Third, we devise a soft inception loss that injects a soft prior label to prevent the synthetic images from overfitting to a fixed object. Our IntraQ is demonstrated to retain the intra-class heterogeneity well in the synthetic images and is also observed to achieve state-of-the-art performance. For example, compared to advanced ZSQ methods, our IntraQ obtains a 9.17\% increase in top-1 accuracy on ImageNet when all layers of MobileNetV1 are quantized to 4-bit. Code is at https://github.com/viperit/InterQ.
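As a rough illustration only (not the paper's exact formulation), the "soft prior label" idea can be sketched as label smoothing: instead of forcing a synthetic image toward a hard one-hot class, the loss targets a softened distribution so the image is not driven to a single fixed object. The function names and the smoothing weight `epsilon` below are hypothetical.

```python
import numpy as np

def soft_label(target_class, num_classes, epsilon=0.1):
    # Soften a one-hot label: keep (1 - epsilon) probability mass on
    # the target class and spread epsilon uniformly over the others.
    # epsilon is a hypothetical smoothing weight, not a paper value.
    y = np.full(num_classes, epsilon / (num_classes - 1))
    y[target_class] = 1.0 - epsilon
    return y

def soft_inception_loss(logits, target_class, epsilon=0.1):
    # Cross-entropy between the network's prediction on a synthetic
    # image and the softened prior label (numerically stable log-softmax).
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    y = soft_label(target_class, logits.shape[0], epsilon)
    return float(-(y * log_p).sum())
```

Under this sketch, the loss never reaches zero for a hard prediction, so the image generator is discouraged from collapsing every synthetic sample of a class onto one prototypical object.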