Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. To address these challenges and enable online learning in memristive neuromorphic RNNs, we present a simulation framework of differential-architecture crossbar arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model. We train a spiking RNN whose weights are emulated in the presented simulation framework, using the recently proposed e-prop learning rule. Although e-prop locally approximates the ideal synaptic updates, these updates are difficult to implement on the memristive substrate due to substantial PCM non-idealities. We compare several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
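The gradient-accumulation scheme mentioned above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual device model: the class name, conductance step size, write-noise magnitude, and accumulation threshold are all assumptions. Each synapse is stored as the difference of two PCM conductances (the differential architecture), PCM programming is modeled as gradual, noisy SET pulses only, and a pulse is issued only once the accumulated gradient crosses a threshold, so that small, noisy updates are not wasted on the device:

```python
import random


class DifferentialPCMPair:
    """Toy model of one synapse stored as w = G_plus - G_minus.

    Hypothetical parameters: g_max is the conductance ceiling and
    steps sets the nominal SET-pulse increment.
    """

    def __init__(self, g_max=1.0, steps=16):
        self.g_plus = 0.0
        self.g_minus = 0.0
        self.g_max = g_max
        self.dg = g_max / steps  # nominal conductance increment per SET pulse

    def weight(self):
        return self.g_plus - self.g_minus

    def pulse(self, sign):
        # PCM supports gradual SET only, so a weight increase potentiates
        # G_plus and a decrease potentiates G_minus. Multiplicative Gaussian
        # write noise stands in for device-to-device and cycle-to-cycle
        # variability (the 0.3 factor is an illustrative assumption).
        noisy_dg = max(0.0, self.dg * (1.0 + 0.3 * random.gauss(0, 1)))
        if sign > 0:
            self.g_plus = min(self.g_max, self.g_plus + noisy_dg)
        else:
            self.g_minus = min(self.g_max, self.g_minus + noisy_dg)


def train_step(pair, acc, grad, lr=0.1, threshold=0.05):
    """Accumulate the (e-prop-style) gradient in a small digital register
    and program a device pulse only when the accumulation crosses a
    threshold; the residual is reset after each programmed pulse."""
    acc -= lr * grad  # gradient descent: positive gradient lowers the weight
    if abs(acc) >= threshold:
        pair.pulse(1 if acc > 0 else -1)
        acc = 0.0
    return acc
```

Because each programmed pulse represents many accumulated micro-updates, the scheme reduces the number of (noisy, nonlinear) device writes while still following the average gradient direction.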