Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
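The abstract's central claim is that a ReLU activation can be carried losslessly by the timing of a single spike. The paper's exact construction is not reproduced here; the sketch below only illustrates the general idea of time-to-first-spike (TTFS) latency coding under an assumed linear encoding (`t = t_max - a`, so a larger activation means an earlier spike), with `T_MAX`, `encode_ttfs`, and `decode_ttfs` being hypothetical names for this illustration.

```python
import numpy as np

# Hypothetical TTFS illustration: a larger ReLU activation maps to an
# earlier spike time. This is NOT the paper's exact mapping, only the
# general single-spike latency-coding idea it builds on.

rng = np.random.default_rng(0)
T_MAX = 10.0  # assumed coding window; activations must stay below T_MAX


def relu(x):
    return np.maximum(x, 0.0)


def encode_ttfs(a, t_max=T_MAX):
    """Encode non-negative activations as spike times (earlier = larger)."""
    return t_max - a


def decode_ttfs(t, t_max=T_MAX):
    """Recover activations from spike times."""
    return t_max - t


# One dense ReLU layer with random parameters
W = rng.normal(size=(4, 3))
b = rng.normal(size=3)
x = rng.uniform(0.0, 1.0, size=4)

a = relu(x @ W + b)      # ReLU network output
t = encode_ttfs(a)       # spike times of the corresponding SNN layer
a_rec = decode_ttfs(t)   # value read back from spike timing

assert np.allclose(a, a_rec)  # the latency code is exact, not approximate
```

Because the encoding is an invertible affine map, the round trip introduces no error at all, which is consistent with the abstract's claim of a zero-percent accuracy drop.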