Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency needed for edge computing devices. Neuromorphic hardware architectures that emulate SNNs in the analog/mixed-signal domain have been proposed to achieve order-of-magnitude higher energy efficiency than all-digital architectures, albeit at the expense of limited scalability, susceptibility to noise, complex verification, and poor flexibility. On the other hand, state-of-the-art digital neuromorphic architectures focus on achieving either high energy efficiency (Joules/synaptic operation (SOP)) or high throughput efficiency (SOPs/second/area), resulting in poor ET efficiency. In this work, we present THOR, an all-digital neuromorphic processor with a novel memory hierarchy and neuron update architecture that addresses both energy consumption and throughput bottlenecks. We implemented THOR in 28 nm FDSOI CMOS technology, and our post-layout results demonstrate an ET efficiency of 7.29G $\text{TSOP}^2/\text{mm}^2\text{Js}$ at 0.9 V and 400 MHz, which represents a 3X improvement over state-of-the-art digital neuromorphic processors.