Adaptive changes in synaptic efficacy between spiking neurons have been shown to play a critical role in learning in biological neural networks. Despite this source of inspiration, many learning-focused applications of Spiking Neural Networks (SNNs) retain static synaptic connections, preventing any further learning after the initial training period. Here, we introduce a framework for simultaneously learning, through gradient descent, the underlying fixed weights and the rules governing the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in SNNs. We further demonstrate the capabilities of this framework on a series of challenging benchmarks, learning the parameters of several plasticity rules, including BCM, Oja's, and their respective neuromodulatory variants. The experimental results show that SNNs augmented with differentiable plasticity can solve a set of challenging temporal learning tasks that a traditional SNN fails to solve, even in the presence of significant noise. These networks are also shown to be capable of producing locomotion on a high-dimensional robotic learning task, with only minimal degradation in performance observed under novel conditions not seen during the initial training period.
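To make the idea of differentiable plasticity concrete, the following is a minimal sketch, not the authors' implementation: a layer whose effective weights combine fixed (slow) weights with a fast, plastic component updated by an Oja-style rule, where the plasticity coefficients are themselves trainable. The spiking dynamics are replaced by a surrogate activation for brevity, and all names (PlasticLayer, eta, alpha, w_fast) are illustrative assumptions.

```python
# Sketch only: jointly optimizing base weights and the parameters of an
# Oja-style plasticity rule by backpropagating through the weight dynamics.
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_base = nn.Parameter(0.1 * torch.randn(n_out, n_in))  # fixed (slow) weights
        self.eta = nn.Parameter(torch.tensor(0.01))                 # learnable plasticity rate
        self.alpha = nn.Parameter(torch.tensor(1.0))                # learnable decay coefficient

    def forward(self, x_seq):
        # x_seq: (T, batch, n_in) presynaptic activity over time
        w_fast = torch.zeros_like(self.w_base)                      # plastic component, reset per episode
        outputs = []
        for x in x_seq:
            w = self.w_base + w_fast
            y = torch.sigmoid(x @ w.t())                            # surrogate for the spiking nonlinearity
            # Oja-style update: dw = eta * (y x^T - alpha * y^2 * w), averaged over the batch
            hebb = torch.einsum('bo,bi->oi', y, x) / x.shape[0]
            decay = (torch.einsum('bo,bo->o', y, y)[:, None] / x.shape[0]) * w
            w_fast = w_fast + self.eta * (hebb - self.alpha * decay)
            outputs.append(y)
        return torch.stack(outputs)
```

Because w_fast is computed differentiably from w_base, eta, and alpha, an ordinary optimizer can update all of them end-to-end from a task loss on the rollout; a neuromodulated variant would, for example, scale eta by a signal produced by the network itself.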