Deep neural networks have proven effective across a wide range of tasks. However, their high computational and memory costs make them impractical to deploy on resource-constrained devices. To address this issue, quantization schemes have been proposed to reduce the memory footprint and improve inference speed. Although numerous quantization methods exist, a systematic analysis of their effectiveness is still lacking. To bridge this gap, we collect and improve existing quantization methods and propose a golden guideline for post-training quantization. We evaluate the effectiveness of the proposed guideline with two popular models, ResNet50 and MobileNetV2, on the ImageNet dataset. Following our guideline, directly quantizing the models to 8 bits causes no accuracy degradation, even without additional training. Quantization-aware training built on the same guideline further improves accuracy at lower bit widths. Moreover, we integrate a multi-stage fine-tuning strategy that works harmoniously with existing pruning techniques to reduce costs even further. Remarkably, our results show that a quantized MobileNetV2 with 30\% sparsity surpasses the equivalent full-precision model, underscoring the effectiveness and resilience of the proposed scheme.
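To make the post-training setting concrete, the sketch below shows a generic per-tensor affine (asymmetric) 8-bit quantize/dequantize pair in NumPy. It is a minimal illustration of the kind of mapping such schemes rely on, not the specific method or guideline proposed here; the function names and the uint8 range are assumptions for the example.

\begin{verbatim}
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Affine per-tensor quantization of a float tensor to uint8.

    Returns the quantized tensor plus the scale and zero-point needed
    to map values back to floating point. (Illustrative sketch only;
    not the paper's exact scheme.)
    """
    x_min, x_max = float(x.min()), float(x.max())
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # keep 0 exactly representable
    scale = (x_max - x_min) / 255.0 or 1.0           # guard against a constant tensor
    zero_point = int(round(-x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize_uint8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map uint8 values back to approximate floating point."""
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    w = np.random.randn(64, 64).astype(np.float32)   # stand-in for a weight tensor
    q, s, z = quantize_uint8(w)
    w_hat = dequantize_uint8(q, s, z)
    print("max abs reconstruction error:", np.abs(w - w_hat).max())
\end{verbatim}

In quantization-aware training, the same round-and-clip mapping is typically simulated in the forward pass ("fake quantization") so the network can adapt to the quantization error at lower bit widths.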