Recent breakthroughs in neuromorphic computing show that local forms of gradient-descent learning are compatible with Spiking Neural Networks (SNNs) and synaptic plasticity. Although SNNs can be scalably implemented using neuromorphic VLSI, an architecture that can learn in situ using gradient descent is still missing. In this paper, we propose a local, gradient-based, error-triggered learning algorithm with online ternary weight updates. The proposed algorithm enables online training of multi-layer SNNs on memristive neuromorphic hardware with only a small loss in performance compared with the state of the art. We also propose a hardware architecture based on memristive crossbar arrays to perform the required vector-matrix multiplications. The necessary peripheral circuitry, including the pre-synaptic, post-synaptic, and write circuits required for online training, has been designed in the sub-threshold regime for power saving in a standard 180 nm CMOS process.
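The error-triggered ternary update described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the threshold `theta`, the step size `dw`, and the outer-product gradient approximation are all assumptions made for the sketch; the key ideas it captures are that a write occurs only when the error magnitude crosses a threshold, and that each triggered write is ternary (+1, 0, or -1 scaled by a fixed step), matching the coarse write granularity of a memristive crossbar.

```python
import numpy as np

def error_triggered_ternary_update(W, pre_trace, error, theta=0.5, dw=0.01):
    """Sketch of an error-triggered ternary weight update (illustrative only).

    W         : (n_pre, n_post) weight matrix, e.g. one crossbar array
    pre_trace : (n_pre,) pre-synaptic activity trace
    error     : (n_post,) local error signal at the post-synaptic neurons
    """
    # Error-triggered gating: update a column only when |error| > theta.
    triggered = np.abs(error) > theta
    # Outer product approximates the local gradient: pre activity x error.
    # Taking its sign yields a ternary (+1/0/-1) update direction.
    grad_sign = np.sign(np.outer(pre_trace, error))
    # Apply a fixed-magnitude write only where an error event was triggered.
    W -= dw * grad_sign * triggered
    return W
```

In hardware, the gating means most time steps perform no write at all, which reduces write energy and wear on the memristive devices, while the ternary step maps onto single-pulse increment/decrement programming of a crossbar cell.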