In this paper we study the effects of quantization in DNN training. We hypothesize that weight quantization is a form of regularization, and that the amount of regularization is correlated with the quantization level (precision). We confirm this hypothesis with an analytical study and empirical results. By modeling weight quantization as additive noise on the weights, we explore how this noise propagates through the network during training, and we show that the magnitude of the noise is correlated with the level of quantization. To support the analysis, we performed an extensive set of experiments showing that the regularization effect of quantization appears across various vision tasks and models, over various datasets. Based on this study, we propose 8-bit quantization as a reliable form of regularization for different vision tasks and models.
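The additive-noise view of quantization described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: it applies a simple uniform symmetric quantizer (an assumed scheme; the `quantize` helper is hypothetical) to a toy weight tensor and measures the standard deviation of the induced noise at several precisions, showing that the noise magnitude grows as the bit width shrinks.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization of a weight tensor to `bits` precision
    # (an assumed scheme for illustration only).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=10_000)  # toy weight tensor

for bits in (8, 4, 2):
    noise = quantize(w, bits) - w  # quantization viewed as additive noise
    print(f"{bits}-bit noise std: {noise.std():.6f}")
```

Lower precision yields a coarser grid and hence larger additive noise, consistent with the claim that the amount of implicit regularization is correlated with the quantization level.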