Recent convolutional neural network (CNN) development continues to advance state-of-the-art model accuracy for various applications. However, this enhanced accuracy comes at the cost of substantial memory bandwidth, storage requirements, and computational resources. Although quantization methods have effectively reduced deployment costs for edge devices, they suffer from significant information loss when processing the biased activations of contemporary CNNs. In this paper, we therefore introduce an adaptive high-performance quantization method that resolves the biased-activation issue by dynamically adjusting the scaling and shifting factors based on the task loss. Our proposed method has been extensively evaluated on image classification models (ResNet-18/34/50, MobileNet-V2, EfficientNet-B0) with the ImageNet dataset, an object detection model (YOLO-V4) with the COCO dataset, and language models with the PTB dataset. The results show that our 4-bit integer (INT4) quantized models achieve better accuracy than state-of-the-art 4-bit models and, in some cases, even surpass their full-precision (golden) counterparts. The final designs have been successfully deployed onto extremely resource-constrained edge devices for many practical applications.
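To make the scale-and-shift idea concrete, the sketch below shows plain asymmetric (affine) quantization, where a shifting factor lets a biased activation range occupy the full INT4 grid. This is a minimal illustration of the underlying arithmetic only; the function name, the fixed example range [0, 6], and the hand-picked scale are assumptions, and the paper's actual method learns the scaling and shifting factors from the task loss rather than fixing them.

```python
def quantize_affine(x, scale, shift, num_bits=4):
    """Hedged sketch of asymmetric (scale + shift) quantization.

    The shift acts as a zero-point offset, so a biased activation range
    (e.g. non-negative post-ReLU values) can use all 2**num_bits levels
    instead of wasting half the grid on values that never occur.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    q = round((x - shift) / scale)       # map to an integer level
    q = max(qmin, min(qmax, q))          # clamp to the INT4 range
    return q * scale + shift             # dequantize back to real values

# Example: activations biased to [0, 6] (e.g. ReLU6 outputs).
# A scale of 6/15 places one quantization step per INT4 level.
scale = 6.0 / 15
vals = [0.0, 1.7, 3.3, 6.0]
deq = [quantize_affine(v, scale, 0.0) for v in vals]
```

In the adaptive method described above, `scale` and `shift` would be trainable parameters updated by gradients of the task loss (via a straight-through estimator for the non-differentiable rounding), rather than the constants used here.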