Spike-based neuromorphic hardware holds the promise of providing more energy-efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs. But this requires an understanding of how DNNs can be emulated in an event-based, sparse firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence processing tasks typically employ Long Short-Term Memory (LSTM) units that are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel's Loihi chip. Filter approximation theory explains why AHP-neurons can emulate the function of LSTM units. This yields a highly energy-efficient approach to time series classification. Furthermore, it provides the basis for implementing, with very sparse firing, an important class of large DNNs that extract relations between words and sentences in a text in order to answer questions about the text.
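To make the mechanism concrete, the following is a minimal sketch of a leaky integrate-and-fire neuron extended with a slow AHP current, which each spike strengthens and which decays on a much longer timescale than the membrane. All names, constants, and the discrete-time formulation are illustrative assumptions, not taken from the paper or from Loihi's API.

```python
import numpy as np

def simulate_ahp_neuron(input_current, dt=1.0, tau_mem=20.0, tau_ahp=500.0,
                        v_thresh=1.0, ahp_jump=0.2):
    """Discrete-time LIF neuron with a slow AHP current (illustrative sketch).

    Returns the membrane potential trace and the binary spike train.
    """
    n_steps = len(input_current)
    v = 0.0          # membrane potential
    i_ahp = 0.0      # slow AHP current (negative feedback after each spike)
    alpha = np.exp(-dt / tau_mem)   # membrane decay factor per step
    rho = np.exp(-dt / tau_ahp)     # AHP decay factor per step (much slower)
    spikes = np.zeros(n_steps)
    v_trace = np.zeros(n_steps)
    for t in range(n_steps):
        # The AHP current is subtracted from the input: it hyperpolarizes
        # the neuron, suppressing further spikes after recent activity.
        v = alpha * v + (1.0 - alpha) * (input_current[t] - i_ahp)
        i_ahp = rho * i_ahp
        if v >= v_thresh:
            spikes[t] = 1.0
            v = 0.0              # reset membrane after spike
            i_ahp += ahp_jump    # each spike strengthens the AHP current
        v_trace[t] = v
    return v_trace, spikes

# Under constant input, the AHP current accumulates across spikes and
# lowers the firing rate: the neuron retains a trace of its own recent
# activity on the timescale of tau_ahp, far beyond the membrane constant.
v_trace, spikes = simulate_ahp_neuron(np.full(2000, 1.5))
print("number of spikes:", int(spikes.sum()))
```

Because the AHP current decays slowly, the neuron's state carries information about past spikes over hundreds of time steps while firing only sparsely; this long negative-feedback memory is the property that, per the filter-approximation argument above, lets such neurons take over the role of LSTM units.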