This paper demonstrates how to construct ensembles of spiking neural networks producing state-of-the-art results, achieving classification accuracies of 98.71%, 100.0%, and 99.09% on the MNIST, NMNIST, and DVS Gesture datasets, respectively. Furthermore, this performance is achieved using simplified individual models, with ensembles containing fewer than 50% of the parameters of published reference models. We provide a comprehensive exploration of the effect of spike-train interpretation methods, and derive a theoretical methodology for combining model predictions such that performance improvements are guaranteed for spiking ensembles. To this end, we formalize spiking neural networks as GLM predictors, identifying a suitable representation for their target domain. Further, we show how the diversity of our spiking ensembles can be measured using the Ambiguity Decomposition. This work demonstrates how ensembling can overcome the challenge of producing individual SNN models that can compete with traditional deep neural networks, and creates systems with fewer trainable parameters and smaller memory footprints, opening the door to low-power edge applications, e.g. implementations on neuromorphic hardware.
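The diversity measure mentioned above can be made concrete. As a minimal illustrative sketch (not code from the paper; all names are hypothetical), the Ambiguity Decomposition of Krogh and Vedelsby states that for a convex combination of member predictions, the ensemble's squared error equals the weighted average member error minus the weighted variance of members around the ensemble (the "ambiguity"), which is why averaging can only help:

```python
# Illustrative sketch, assuming the standard Krogh & Vedelsby
# ambiguity decomposition for a weighted (convex) ensemble:
#   (f_bar - y)^2 = sum_i w_i (f_i - y)^2 - sum_i w_i (f_i - f_bar)^2
# Function and variable names are hypothetical, not from the paper.

def ambiguity_decomposition(preds, weights, target):
    """Return (ensemble_error, avg_member_error, ambiguity) for one example."""
    ens = sum(w * p for w, p in zip(weights, preds))    # ensemble prediction
    ens_err = (ens - target) ** 2                       # ensemble squared error
    avg_err = sum(w * (p - target) ** 2                 # weighted member error
                  for w, p in zip(weights, preds))
    ambiguity = sum(w * (p - ens) ** 2                  # spread around ensemble
                    for w, p in zip(weights, preds))
    return ens_err, avg_err, ambiguity

# Toy member predictions for a single target value.
preds = [0.9, 0.7, 0.95]
weights = [1 / 3, 1 / 3, 1 / 3]
target = 1.0

ens_err, avg_err, amb = ambiguity_decomposition(preds, weights, target)
# The identity holds exactly: ensemble error = average error - ambiguity.
assert abs(ens_err - (avg_err - amb)) < 1e-12
```

Since the ambiguity term is non-negative, the ensemble error never exceeds the weighted average member error; the more diverse the members, the larger the gap.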