To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. Nevertheless, emulating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far fewer than one spike per hidden neuron, perform inference at rates of up to 85 k frames per second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic substrates and constitutes an important step toward on-chip learning.