In this short note, we propose a new method for quantizing the weights of a fully trained neural network. A simple deterministic pre-processing step allows us to quantize network layers via memoryless scalar quantization while preserving the network performance on given training data. On the one hand, the computational complexity of this pre-processing slightly exceeds that of state-of-the-art algorithms in the literature. On the other hand, our approach does not require any hyper-parameter tuning and, in contrast to previous methods, admits a plain analysis. We provide rigorous theoretical guarantees in the case of quantizing single network layers and show that the relative error decays with the number of parameters in the network if the training data behaves well, e.g., if it is sampled from suitable random distributions. The developed method also readily allows the quantization of deep networks by consecutive application to single layers.
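For concreteness, the following is a minimal illustrative sketch of memoryless scalar quantization applied to a single fully connected layer, together with the relative error of the layer's action on training data. It deliberately omits the deterministic pre-processing step that is the subject of this note; the alphabet construction, step size, and all variable names are assumptions made purely for illustration.

```python
import numpy as np

def msq(w, step, bound):
    """Memoryless scalar quantization (illustrative, not the method of this note):
    each entry of w is rounded independently to the nearest element of the fixed
    symmetric alphabet {-bound*step, ..., -step, 0, step, ..., bound*step}."""
    w = np.asarray(w, dtype=float)
    return step * np.clip(np.round(w / step), -bound, bound)

# Toy usage on a hypothetical layer: quantize W and compare the layer's
# action on training data X before and after quantization.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))          # hypothetical training samples (N x n)
W = rng.standard_normal((64, 32))           # hypothetical layer weights (n x m)

bound = 7                                   # 15-level alphabet, chosen ad hoc
step = np.max(np.abs(W)) / bound            # ad hoc step size so no entry saturates
Wq = msq(W, step, bound)

rel_err = np.linalg.norm(X @ W - X @ Wq) / np.linalg.norm(X @ W)
print(f"relative error of the quantized layer on the training data: {rel_err:.3f}")
```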