This work considers a challenging Deep Neural Network (DNN) quantization task that seeks to train quantized DNNs without involving any full-precision operations. Most previous quantization approaches are not applicable to this task, since they rely on full-precision gradients to update the network weights. To fill this gap, we advocate using Evolutionary Algorithms (EAs) to search for the optimal low-bit weights of DNNs. To solve the induced large-scale discrete optimization problem efficiently, we propose a novel EA based on cooperative coevolution that repeatedly groups the network weights by the confidence in their values and focuses on optimizing those with the least confidence. To the best of our knowledge, this is the first work to apply EAs to training quantized DNNs. Experiments show that our approach surpasses previous quantization approaches and can train a 4-bit ResNet-20 on the CIFAR-10 dataset to the same test accuracy as its full-precision counterpart.
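The sketch below illustrates the cooperative-coevolution loop described above: weights are kept as discrete 4-bit codes, the least-confident subgroup is selected, and only that subcomponent is evolved while the rest stay frozen. This is a minimal illustration under stated assumptions, not the paper's exact design: the confidence heuristic (codes that survive a generation gain confidence), the mutation rate, the group size, and the placeholder `loss` function are all illustrative choices.

```python
# Minimal sketch of a cooperative-coevolution EA over discrete 4-bit weight
# codes. The confidence update, group size, and mutation scheme are
# illustrative assumptions; `loss` is a stand-in for evaluating the
# quantized network on a mini-batch.
import numpy as np

BITS = 4
LEVELS = 2 ** BITS  # 16 discrete codes per weight


def loss(codes):
    """Placeholder fitness: replace with the quantized network's loss on a
    mini-batch. Here we just reward codes near the middle level."""
    target = np.full_like(codes, LEVELS // 2)
    return np.abs(codes - target).sum()


def coevolve(n_weights=256, group_size=32, generations=100, pop_size=8, seed=0):
    rng = np.random.default_rng(seed)
    codes = rng.integers(0, LEVELS, size=n_weights)  # current best solution
    confidence = np.zeros(n_weights)                 # survival count per weight

    for _ in range(generations):
        # Group step: pick the least-confident weights; freeze all others.
        group = np.argsort(confidence)[:group_size]

        # Evolve only this subcomponent for one generation.
        population = np.tile(codes[group], (pop_size, 1))
        mutate = rng.random(population.shape) < 0.1
        population[mutate] = rng.integers(0, LEVELS, size=int(mutate.sum()))

        # Fitness of each candidate, evaluated in the context of the
        # frozen remainder of the network (the cooperative part).
        fitness = []
        for candidate in population:
            trial = codes.copy()
            trial[group] = candidate
            fitness.append(loss(trial))
        best = population[int(np.argmin(fitness))]

        # Codes the EA left unchanged gain confidence; changed ones reset.
        unchanged = best == codes[group]
        confidence[group] = np.where(unchanged, confidence[group] + 1, 0)
        codes[group] = best

    return codes


if __name__ == "__main__":
    final_codes = coevolve()
    print("final loss:", loss(final_codes))
```

Note that no gradient is computed anywhere in the loop: the search operates purely on discrete codes and fitness evaluations, which is what makes the approach applicable when full-precision operations are disallowed.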