While Spiking Neural Networks (SNNs) have been gaining in popularity, it seems that the algorithms used to train them are not yet powerful enough to solve the same tasks as those tackled by classical Artificial Neural Networks (ANNs). In this paper, we provide an extra tool to help understand and train SNNs, using theory from the field of time encoding. Time encoding machines (TEMs) can be used to model integrate-and-fire neurons and have well-understood reconstruction properties. We show how one can take inspiration from the field of TEMs to interpret the spike times of an SNN as constraints on its weight matrices. More specifically, we study how to train one-layer SNNs by solving a set of linear constraints, and how to train two-layer SNNs by leveraging the all-or-none and asynchronous nature of the spikes they emit. These properties of spikes enable an alternative to backpropagation that is not available when activations are simultaneous and graded, as in classical ANNs.
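As a concrete illustration of the one-layer case, the following is a minimal sketch (in Python, with hypothetical input signals, weights, and threshold; it is not the paper's implementation). It uses the standard TEM property of an ideal, non-leaky integrate-and-fire neuron: each inter-spike interval (t_k, t_{k+1}] imposes the linear constraint ∫ wᵀx(t) dt = δ on the weight vector w, where δ is the firing threshold, so the weights can be recovered from spike times by least squares.

```python
import numpy as np

# Hypothetical setup: three known input signals on a fine time grid.
T, dt = 10.0, 1e-3
t = np.arange(0.0, T, dt)
x = np.vstack([np.sin(2 * np.pi * f * t) + 1.5 for f in (1.0, 2.3, 3.7)])

w_true = np.array([0.8, -0.3, 0.5])   # unknown weights to be recovered
delta = 0.5                           # firing threshold

# Simulate an ideal (non-leaky) integrate-and-fire neuron with
# reset-by-subtraction: the membrane integrates w^T x(t) and fires at delta.
drive = w_true @ x
membrane, spike_idx = 0.0, []
for k, v in enumerate(drive):
    membrane += v * dt
    if membrane >= delta:
        spike_idx.append(k)
        membrane -= delta

# Each inter-spike interval contributes one linear constraint:
# the integral of w^T x(t) over the interval equals delta,
# i.e. one row of the linear system A w = delta * 1.
A = np.array([x[:, k0 + 1 : k1 + 1].sum(axis=1) * dt
              for k0, k1 in zip(spike_idx[:-1], spike_idx[1:])])
b = np.full(len(A), delta)

w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true weights     :", w_true)
print("recovered weights:", np.round(w_hat, 4))
```

With more spikes (constraints) than unknown weights, the system is overdetermined, and the least-squares solve is robust to the small discretization residuals of the simulation.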