Spiking neural networks (SNNs), which mimic information transmission in the brain, can process spatio-temporal information energy-efficiently through discrete and sparse spikes, and have therefore received considerable attention. To improve the accuracy and energy efficiency of SNNs, most previous studies have focused solely on training methods, and the effect of architecture has rarely been studied. We investigate the design choices used in previous studies in terms of accuracy and the number of spikes, and find that they are not well-suited for SNNs. To further improve accuracy and reduce the number of spikes generated by SNNs, we propose a spike-aware neural architecture search framework called AutoSNN. We define a search space consisting of architectures without these undesirable design choices. To enable spike-aware architecture search, we introduce a fitness function that considers both accuracy and the number of spikes. AutoSNN successfully searches for SNN architectures that outperform hand-crafted SNNs in both accuracy and energy efficiency. We thoroughly demonstrate the effectiveness of AutoSNN on various datasets, including neuromorphic datasets.