Neural network quantization aims to transform the high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation, while preserving the performance of the original model. However, extreme quantization (1-bit weights/1-bit activations) of compactly designed backbone architectures (e.g., MobileNets), which are often used for edge-device deployment, results in severe performance degradation. This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate this degradation even under extreme quantization by focusing on inter-weight dependencies, both between weights within each layer and across consecutive layers. To minimize the quantization impact of each weight on the others, we perform an orthonormal transformation of the weights at each layer by training an input-dependent correlation matrix and importance vector, such that each weight is disentangled from the others. We then quantize the weights based on their importance, minimizing the information loss relative to the original weights/activations. We further perform progressive layer-wise quantization from the bottom layer to the top, so that quantization at each layer reflects the quantized distributions of weights and activations at the preceding layers. We validate the effectiveness of our method on various benchmark datasets against strong neural quantization baselines, demonstrating that it alleviates the performance degradation on ImageNet and fully preserves the full-precision model performance on CIFAR-100 with compact backbone networks.
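To make the three ingredients of the abstract concrete, the following is a minimal, hypothetical PyTorch sketch; it is not the paper's implementation. The class and function names (BinarizeSTE, DisentangledBinaryLinear, progressive_quantize) are illustrative, the paper's input-dependent correlation matrix is approximated here by a fixed learned matrix whose orthonormal factor is taken via QR, and the scalar rescaling of the binarized weights is the standard XNOR-Net-style choice rather than anything specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """1-bit (sign) quantizer with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Standard STE: pass gradients through only where |x| <= 1.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)


class DisentangledBinaryLinear(nn.Module):
    """Sketch of one layer in the spirit of the described method: an
    orthonormal transform (the Q factor of a learned matrix) disentangles
    the weights, a learned importance vector reweights them, and the
    result is binarized. NOTE: the paper's correlation matrix is
    input-dependent; this sketch uses a fixed learned matrix for brevity."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.corr = nn.Parameter(torch.eye(in_features))      # learned; Q via QR
        self.importance = nn.Parameter(torch.zeros(in_features))
        self.quantize_enabled = False                          # flipped on progressively

    def forward(self, x):
        if not self.quantize_enabled:
            return F.linear(x, self.weight)                    # full-precision stage
        q, _ = torch.linalg.qr(self.corr)                      # orthonormal factor
        w = self.weight @ q                                    # disentangled weights
        imp = F.softplus(self.importance)                      # positive importances
        w_b = BinarizeSTE.apply(w * imp)                       # importance-weighted 1-bit weights
        scale = (w * imp).abs().mean()                         # XNOR-style scalar scale
        return F.linear(x, scale * (w_b @ q.t()))              # map back and apply


def progressive_quantize(layers, finetune_fn):
    """Enable quantization bottom-to-top, fine-tuning after each stage, so
    each layer is trained against the already-quantized distributions of
    the layers below it."""
    for layer in layers:
        layer.quantize_enabled = True
        finetune_fn(layers)
```

The per-layer quantize_enabled flag is one simple way to realize the progressive schedule: lower layers commit to their quantized form first, and each subsequent fine-tuning stage lets the upper layers adapt to those quantized distributions before being quantized themselves.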