While Moore's law has driven exponential expectations of computing power, its nearing end calls for new avenues to improve overall system performance. One of these avenues is the exploration of alternative brain-inspired computing architectures that aim at achieving the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic intelligence represents a paradigm shift in computing, based on the implementation of spiking neural network architectures that tightly co-locate processing and memory. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity present in existing silicon implementations, comparing approaches that aim at replicating natural intelligence (bottom-up) with those that aim at solving practical artificial intelligence applications (top-down), and assessing the benefits of the different circuit design styles used to achieve these goals. First, we present the analog, mixed-signal, and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation, and novel devices. Next, we highlight the key tradeoffs for each of the bottom-up and top-down approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify both the necessary synergies and the missing elements required to achieve a competitive advantage for neuromorphic edge computing over conventional machine-learning accelerators, and we outline the key elements of a framework toward neuromorphic intelligence.
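To make the notion of spiking neural computation referred to above concrete, the following is a minimal, illustrative sketch (not from the paper) of a leaky integrate-and-fire neuron simulated in software; the function name, parameter values, and Euler discretization are assumptions chosen for clarity only.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, discretized with the
# forward Euler method. All parameter values are illustrative only.
def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Simulate a single LIF neuron; return its membrane trace and spike train."""
    v = v_rest
    v_trace, spikes = [], []
    for i_in in input_current:
        # Leaky integration: dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt / tau * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset        # reset the membrane potential
        else:
            spikes.append(0)
        v_trace.append(v)
    return np.array(v_trace), np.array(spikes)

# Drive the neuron with a constant suprathreshold current for 200 time steps.
current = np.full(200, 1.5)
v_trace, spikes = lif_neuron(current)
print(f"Spike count: {spikes.sum()}")
```

In hardware, such neuron dynamics are realized directly in analog, mixed-signal, or digital circuits rather than simulated in a loop, which is where the processing/memory co-location discussed in the abstract comes into play.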