Manufacturing-viable neuromorphic chips require novel computer architectures to achieve the massively parallel and efficient information processing the brain supports so effortlessly. Emerging event-based architectures are making this dream a reality. However, the large memory required for synaptic connectivity is a showstopper for executing modern convolutional neural networks (CNNs) on massively parallel, event-based (spiking) architectures. This work overcomes this roadblock by contributing a lightweight hardware scheme that compresses the synaptic memory requirements by several thousand times, enabling the execution of complex CNNs on a single, small-form-factor chip. A silicon implementation in a 12-nm technology shows that the technique increases the system's implementation cost by only 2%, despite achieving a total memory-footprint reduction of up to 374x compared to the best previously published technique.
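The order of magnitude of such savings can be motivated by the weight sharing inherent in convolutions: a spiking core that stores one weight per synapse must replicate each kernel at every output position, whereas a scheme that exploits the CNN structure can store each kernel once. The following back-of-the-envelope sketch uses hypothetical layer dimensions for illustration only; it is not the paper's hardware scheme or its benchmark figures.

```python
# Hypothetical conv layer: 64 input channels, 64 output channels,
# 3x3 kernels, 32x32 output feature map (illustrative values).
c_in, c_out, k, h, w = 64, 64, 3, 32, 32

# Naive per-synapse storage: every output neuron keeps its own
# private copy of its fan-in weights.
per_synapse_weights = c_out * h * w * c_in * k * k

# Shared-kernel storage: each of the c_out kernels is stored once
# and reused across all h*w output positions.
shared_weights = c_out * c_in * k * k

ratio = per_synapse_weights // shared_weights
print(per_synapse_weights, shared_weights, ratio)
```

The compression ratio for this layer equals the number of output positions, h*w = 1024, which is why exploiting convolutional weight sharing can shrink synaptic memory by thousands of times on larger feature maps.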