Spiking Neural Networks (SNNs) are an emerging class of biologically inspired neural networks that have shown promise for low-power AI. A number of methods exist for building deep SNNs, with Artificial Neural Network (ANN)-to-SNN conversion being highly successful. MaxPooling layers in Convolutional Neural Networks (CNNs) are integral for downsampling intermediate feature maps and introducing translational invariance, but the absence of their hardware-friendly spiking equivalents limits the conversion of such CNNs to deep SNNs. In this paper, we present two hardware-friendly methods to implement MaxPooling in deep SNNs, thus facilitating the straightforward conversion of CNNs with MaxPooling layers to SNNs. For the first time, we also execute SNNs with spiking-MaxPooling layers on Intel's Loihi neuromorphic hardware (with the MNIST, FMNIST, and CIFAR10 datasets), thereby demonstrating the feasibility of our approach.
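For context, standard (non-spiking) MaxPooling simply takes the maximum over each local window of a feature map. The minimal NumPy sketch below illustrates the 2x2, stride-2 downsampling that the spiking equivalents proposed in this paper must reproduce; it is a generic illustration of conventional MaxPooling, not an implementation of our spiking methods.

```python
import numpy as np

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    """Standard 2x2, stride-2 max pooling on a single-channel feature map."""
    h, w = feature_map.shape
    h2, w2 = h // 2, w // 2
    # Group pixels into non-overlapping 2x2 windows and take the max of each window.
    windows = feature_map[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2)
    return windows.max(axis=(1, 3))

# Example: a 4x4 feature map is downsampled to 2x2.
fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 3, 2],
                 [2, 6, 0, 1]], dtype=float)
print(max_pool_2x2(fmap))
# [[4. 5.]
#  [6. 3.]]
```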