Quantizing deep neural networks (DNNs) is a promising approach to deploying them on embedded devices. However, most existing methods do not quantize gradients, so the process of quantizing DNNs still involves many floating-point operations, which hinders the further application of quantized DNNs. To address this problem, we propose a new heuristic method for quantizing DNNs based on cooperative coevolution. Within the cooperative coevolution framework, we use an estimation of distribution algorithm (EDA) to search for low-bit weights. Specifically, we first construct an initial quantized network from a pre-trained network, rather than initializing randomly, and then search from it over a restricted search space. To the best of our knowledge, this is the largest discrete problem solved by evolutionary algorithms to date. Experiments show that our method can train a 4-bit ResNet-20 on the CIFAR-10 dataset without sacrificing accuracy.
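To make the described search concrete, the sketch below shows a univariate EDA over one coevolved block of weight indices, following the abstract's recipe: quantize pre-trained weights to an initial low-bit solution, restrict the search space to levels near that initialization, and iteratively re-estimate a per-weight distribution from elite samples. All names and hyperparameters (`quantize_indices`, `eda_search_block`, the `evaluate` fitness callback, offset range, population sizes) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

N_BITS = 4
LEVELS = 2 ** N_BITS            # 16 discrete levels for 4-bit weights
OFFSETS = np.array([-1, 0, 1])  # restricted search space around the init (assumed radius)

def quantize_indices(w, lo, hi):
    """Map pre-trained float weights to nearest-level integer indices in [lo, hi]."""
    step = (hi - lo) / (LEVELS - 1)
    return np.clip(np.round((w - lo) / step), 0, LEVELS - 1).astype(int)

def eda_search_block(init_idx, evaluate, pop_size=20, n_elite=5, iters=50, rng=None):
    """Univariate EDA over one coevolved block of weight indices.

    `evaluate` maps an index vector to a scalar fitness (higher is better),
    e.g. validation accuracy with all other blocks held fixed.
    """
    rng = rng or np.random.default_rng()
    dim = init_idx.size
    # Probability model: per weight, a categorical distribution over the
    # few allowed offsets from its quantized pre-trained value.
    probs = np.full((dim, OFFSETS.size), 1.0 / OFFSETS.size)
    best, best_fit = init_idx.copy(), evaluate(init_idx)
    for _ in range(iters):
        # Sample a population: each weight independently picks an offset.
        picks = np.stack([
            [rng.choice(OFFSETS.size, p=probs[j]) for j in range(dim)]
            for _ in range(pop_size)
        ])
        pop = np.clip(init_idx + OFFSETS[picks], 0, LEVELS - 1)
        fits = np.array([evaluate(ind) for ind in pop])
        if fits.max() > best_fit:
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
        # Re-estimate the per-weight distribution from the elite samples
        # (with additive smoothing so no offset's probability collapses to 0).
        elite_picks = picks[np.argsort(fits)[-n_elite:]]
        for j in range(dim):
            counts = np.bincount(elite_picks[:, j], minlength=OFFSETS.size)
            probs[j] = (counts + 0.1) / (counts + 0.1).sum()
    return best, best_fit
```

In a full cooperative coevolution loop, the network's weight indices would be partitioned into blocks and `eda_search_block` applied to each block in turn, with `evaluate` scoring the candidate block against the current best solutions of all other blocks.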