Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of powerful, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if and how they could be mapped onto neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide a comprehensive overview of representative brain-inspired synaptic plasticity models and mixed-signal CMOS neuromorphic circuits within a unified framework. We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and post-synaptic neuron information, which we propose as a fundamental requirement for physical implementations of synaptic plasticity. Based on this principle, we compare the properties of these models within the same framework, and describe the mixed-signal electronic circuits that implement their computing primitives, pointing out how these building blocks enable efficient on-chip and online learning in neuromorphic processing systems.
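To make the locality principle concrete, the sketch below (not taken from the survey; all names and parameter values are illustrative) implements a pair-based STDP rule in which every weight update uses only signals available at the synapse itself: the spike events of the pre- and post-synaptic neurons and two exponentially decaying local traces.

```python
# Minimal sketch of a pair-based STDP rule obeying the locality principle:
# each weight update depends only on the pre-synaptic trace, the
# post-synaptic trace, and the two neurons' spike events. Parameter
# values (tau, a_plus, a_minus) are illustrative assumptions.

def simulate_stdp(pre_spikes, post_spikes, steps, dt=1.0,
                  tau_pre=20.0, tau_post=20.0,
                  a_plus=0.01, a_minus=0.012, w=0.5):
    """pre_spikes / post_spikes: sets of spike time-steps. Returns final weight."""
    x_pre, x_post = 0.0, 0.0          # local eligibility traces
    for t in range(steps):
        # traces decay exponentially (forward-Euler step)
        x_pre -= x_pre * dt / tau_pre
        x_post -= x_post * dt / tau_post
        if t in pre_spikes:
            x_pre += 1.0
            w -= a_minus * x_post     # pre after post -> depression
        if t in post_spikes:
            x_post += 1.0
            w += a_plus * x_pre       # post after pre -> potentiation
        w = min(max(w, 0.0), 1.0)     # hard weight bounds
    return w

# Causal pairing (pre at t=10, post at t=15) potentiates the weight;
# anti-causal pairing (post at t=10, pre at t=15) depresses it.
w_ltp = simulate_stdp({10}, {15}, steps=30)
w_ltd = simulate_stdp({15}, {10}, steps=30)
```

Because the rule touches no global error signal or non-local state, each synapse's update circuit can in principle be laid out next to the synapse itself, which is the property the survey identifies as essential for on-chip learning.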