Uniform-precision neural network quantization has gained popularity because it simplifies the densely packed arithmetic units needed for high computing capability. However, it ignores the heterogeneous sensitivity to quantization errors across layers, resulting in sub-optimal inference accuracy. This work proposes a novel neural architecture search method, called neural channel expansion, that adjusts the network structure to mitigate the accuracy degradation caused by ultra-low uniform-precision quantization. The proposed method selectively expands channels for the quantization-sensitive layers while satisfying hardware constraints (e.g., FLOPs, PARAMs). Through in-depth analysis and experiments, we demonstrate that the proposed method can adapt the channels of several popular networks to achieve superior 2-bit quantization accuracy on CIFAR10 and ImageNet. In particular, we achieve the best-to-date Top-1/Top-5 accuracy for 2-bit ResNet50 with fewer FLOPs and a smaller parameter size.
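To make the core idea concrete, the sketch below shows one simplified way such constrained channel expansion could be phrased: layers with higher quantization-sensitivity scores are widened first, until a FLOPs budget is exhausted. This is a minimal illustration only; the function name, the per-layer sensitivity scores, the greedy loop, and the decay heuristic are all assumptions made here, whereas the paper's actual method performs this selection through neural architecture search.

```python
# Hypothetical sketch: greedily widen the most quantization-sensitive layers
# while keeping the total FLOPs within a fixed budget. The inputs
# (base_channels, sensitivity, flops_per_channel) are illustrative, not
# quantities defined by the paper.

def expand_channels(base_channels, sensitivity, flops_per_channel,
                    flops_budget, step=8, decay=0.5):
    """Return an expanded per-layer channel count under a FLOPs budget."""
    channels = list(base_channels)
    scores = list(sensitivity)
    used = sum(c * f for c, f in zip(channels, flops_per_channel))
    while True:
        # Pick the most sensitive layer whose expansion still fits the budget.
        order = sorted(range(len(channels)), key=lambda i: -scores[i])
        idx = next((i for i in order
                    if used + step * flops_per_channel[i] <= flops_budget), None)
        if idx is None:
            break
        channels[idx] += step                      # widen this layer
        used += step * flops_per_channel[idx]
        scores[idx] *= decay                       # spread expansion across layers
    return channels

# Example usage with made-up numbers.
print(expand_channels(base_channels=[64, 128, 256],
                      sensitivity=[0.9, 0.2, 0.6],
                      flops_per_channel=[1e6, 1e6, 1e6],
                      flops_budget=5e8))
```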