Quantizing deep neural networks is an effective method for reducing memory consumption and improving inference speed, and is thus useful for implementation in resource-constrained devices. However, it is still hard for extremely low-bit models to achieve accuracy comparable with that of full-precision models. To address this issue, we propose learnable companding quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and 4-bit models. LCQ jointly optimizes model weights and learnable companding functions that can flexibly and non-uniformly control the quantization levels of weights and activations. We also present a new weight normalization technique that allows more stable training for quantization. Experimental results show that LCQ outperforms conventional state-of-the-art methods and narrows the gap between quantized and full-precision models for image classification and object detection tasks. Notably, the 2-bit ResNet-50 model on ImageNet achieves top-1 accuracy of 75.1% and reduces the gap to 1.7%, allowing LCQ to further exploit the potential of non-uniform quantization.
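As a rough illustration of the companding idea behind this kind of non-uniform quantization, the sketch below compresses values, quantizes them uniformly in the compressed domain, and then expands them back so that the resulting quantization levels are non-uniform. It assumes a fixed mu-law-style compressing function in place of the paper's learnable companding functions; the function names, the mu parameter, and the symmetric level grid are illustrative assumptions, and the straight-through gradient handling used during training is omitted.

```python
# Minimal sketch of companding quantization (compress -> uniform quantize -> expand).
# The mu-law compressor stands in for a learnable companding function; names and
# parameter choices here are illustrative, not taken from the paper's implementation.
import numpy as np


def compress(x, mu=8.0):
    # Compressing function g(x): allocates more resolution near zero.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)


def expand(y, mu=8.0):
    # Inverse g^{-1}(y): maps uniform levels back to non-uniform ones.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu


def companding_quantize(x, bits=2, mu=8.0):
    # 1) clip to [-1, 1], 2) compress, 3) quantize uniformly in the
    # compressed domain, 4) expand back to non-uniform levels.
    n = 2 ** bits - 1                      # number of uniform steps in [0, 1]
    x = np.clip(x, -1.0, 1.0)
    y = compress(x, mu)                    # y lies in [-1, 1]
    y01 = (y + 1.0) / 2.0                  # map to [0, 1]
    y_q = np.round(y01 * n) / n            # uniform quantization
    return expand(y_q * 2.0 - 1.0, mu)     # back to [-1, 1], then expand


if __name__ == "__main__":
    w = np.linspace(-1.0, 1.0, 9)
    print(companding_quantize(w, bits=2))  # only a few non-uniform levels remain
```

Because the grid is uniform only in the compressed domain, the recovered levels cluster near zero, where weight and activation distributions tend to concentrate; making the compressing function learnable, as LCQ does, lets training decide where that resolution should go.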