Network quantization, which aims to reduce the bit-lengths of network weights and activations, has emerged as one of the key ingredients for shrinking neural networks so that they can be deployed on resource-limited devices. To overcome the inherent difficulty of transforming continuous weights and activations into discrete ones, a recent study called Relaxed Quantization (RQ) [Louizos et al., 2019] successfully employs the popular Gumbel-Softmax relaxation, which makes this transformation amenable to efficient gradient-based optimization. However, RQ with the Gumbel-Softmax relaxation still suffers from a bias-variance trade-off governed by the temperature parameter of Gumbel-Softmax. To resolve this issue, we propose a novel method, Semi-Relaxed Quantization (SRQ), which uses a multi-class straight-through estimator to effectively reduce both bias and variance, along with a new regularization technique, DropBits, which replaces dropout by randomly dropping bits instead of neurons to further reduce the bias of the multi-class straight-through estimator in SRQ. As a natural extension of DropBits, we further introduce a way of learning heterogeneous quantization levels, i.e., a proper bit-length for each layer, using DropBits. We experimentally validate our method on various benchmark datasets and network architectures, and also support the quantized lottery ticket hypothesis: learning heterogeneous quantization levels outperforms training with the same, but fixed, quantization levels from scratch.
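To make the role of a straight-through estimator in quantization concrete, the following is a minimal, self-contained PyTorch sketch of generic uniform quantization with a single-point straight-through estimator (rounding in the forward pass, identity gradient in the backward pass). It is only an illustration of the basic technique the abstract builds on, not an implementation of SRQ, the multi-class straight-through estimator, or DropBits; the function name, bit-length, and clipping range are assumptions chosen for the example.

```python
import torch

def ste_quantize(x, num_bits=4, x_min=-1.0, x_max=1.0):
    """Uniform quantization with a straight-through estimator (illustrative sketch).

    Forward: clip to [x_min, x_max] and snap each value to the nearest grid point
    of a (2**num_bits - 1)-step uniform grid.
    Backward: pass the gradient through the rounding as if it were the identity,
    which is the standard straight-through approximation for the
    non-differentiable rounding operation.
    """
    levels = 2 ** num_bits - 1
    step = (x_max - x_min) / levels
    x_clipped = x.clamp(x_min, x_max)
    q = torch.round((x_clipped - x_min) / step) * step + x_min
    # Straight-through trick: the forward value is q, but the gradient flows
    # through x_clipped because the (q - x_clipped) term is detached.
    return x_clipped + (q - x_clipped).detach()

# Usage: quantize weights in the forward pass while keeping the
# full-precision weights trainable.
w = torch.randn(3, 3, requires_grad=True)
w_q = ste_quantize(w, num_bits=2)
loss = (w_q ** 2).sum()
loss.backward()
print(w.grad)  # gradients reach the full-precision weights despite the rounding
```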