Feedback-driven recurrent spiking neural networks (RSNNs) are powerful computational models that can mimic dynamical systems. However, the presence of a feedback loop from the readout to the recurrent layer destabilizes the learning mechanism and prevents it from converging. Here, we propose a supervised training procedure for RSNNs in which a second network is introduced only during training to provide hints about the target dynamics. The proposed procedure generates targets for both the recurrent and readout layers (i.e., for the full RSNN system) and uses the recursive least squares-based First-Order Reduced and Controlled Error (FORCE) algorithm to fit the activity of each layer to its target. The proposed full-FORCE training procedure reduces the number of weight modifications needed to keep the error between the output and the target close to zero. These modifications control the feedback loop, which allows the training to converge. We demonstrate the improved performance and noise robustness of full-FORCE training on eight dynamical systems modeled with RSNNs using leaky integrate-and-fire (LIF) neurons and rate coding. For energy-efficient hardware implementation, an alternative time-to-first-spike (TTFS) coding scheme is also implemented for the full-FORCE training procedure. Compared to rate coding, full-FORCE with TTFS coding generates fewer spikes and converges faster to the target dynamics.
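To make the recursive least squares (RLS) mechanism behind FORCE concrete, the sketch below trains the readout of a feedback-driven network on a sine wave. This is a minimal rate-based simplification, not the paper's spiking LIF or full-FORCE version: the network size, gain, time step, and target signal are all hypothetical choices, and only the readout weights are trained. The key ingredients match the description above: a feedback loop from the readout `z` into the recurrent layer, and an RLS update that keeps the output error clamped near zero at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300                    # recurrent units (hypothetical size)
g = 1.5                    # recurrent gain, above the edge of chaos
dt, tau = 0.1, 1.0         # Euler step and neuron time constant
alpha = 1.0                # RLS regularizer: P starts as I / alpha

J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
w = np.zeros(N)                                    # trained readout weights
P = np.eye(N) / alpha      # running estimate of the inverse correlation matrix

x = 0.5 * rng.standard_normal(N)
steps = 5000
errs = []
for k in range(steps):
    r = np.tanh(x)                                 # firing rates
    z = w @ r                                      # readout
    # network dynamics driven by recurrence plus readout feedback
    x = x + (dt / tau) * (-x + J @ r + w_fb * z)
    e = z - np.sin(0.3 * k * dt)                   # instantaneous output error
    # RLS (FORCE) update: rank-1 update of P, then error-proportional
    # correction of the readout weights along P @ r
    Pr = P @ r
    denom = 1.0 + r @ Pr
    P -= np.outer(Pr, Pr) / denom
    w -= (e / denom) * Pr
    errs.append(abs(e))

# After training, freeze w and let the network run autonomously
# through its own feedback loop.
test_errs = []
for k in range(steps, steps + 1000):
    r = np.tanh(x)
    z = w @ r
    x = x + (dt / tau) * (-x + J @ r + w_fb * z)
    test_errs.append(abs(z - np.sin(0.3 * k * dt)))
```

The training loop illustrates the "reduced modifications" property noted above: because the RLS update is applied at every step, the output error stays small throughout training, so the feedback signal entering the recurrent layer remains close to the target and the loop stays controlled.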