The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. In this study, we present a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset. To our knowledge, this is the first work to show a Spiking Neural Network (SNN) implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency deployment of modern deep learning applications.
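For readers unfamiliar with the algorithm being mapped onto neuromorphic hardware, the following is a minimal conceptual sketch of standard (non-spiking) backpropagation for a three-layer classifier. It is not the on-chip spiking implementation described above: the network shapes mirror MNIST (784 inputs, 10 classes), but the data here is synthetic, and the hidden size, learning rate, and sigmoid/squared-error choices are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of backpropagation in a three-layer network.
# Shapes mirror MNIST (784 inputs, 10 classes); data is synthetic.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 784, 100, 10
W1 = rng.normal(0, 0.1, (n_in, n_hid))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_hid, n_out))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(x @ W1)  # hidden activations
    y = sigmoid(h @ W2)  # output activations
    return h, y

def backprop_step(x, t, lr=0.5):
    """One stochastic-gradient step on squared error for a single example."""
    global W1, W2
    h, y = forward(x)
    # Error at the output layer, then propagated backward through W2
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

# Train briefly on synthetic one-hot-labeled data.
X = rng.random((200, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, 200)]
for epoch in range(5):
    for x, t in zip(X, T):
        backprop_step(x, t)
```

The spiking, on-chip version in the paper replaces these dense floating-point updates with synfire-gated spike-based coordination of the same forward/backward information flow.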