Spiking Neural Networks (SNNs) have gained increasing attention for their low power consumption, but training SNNs remains challenging. The Liquid State Machine (LSM), a major type of reservoir computing, is widely recognized for its low training cost among SNNs. Exploring LSM topology to enhance performance typically requires hyper-parameter search, which is both resource-intensive and time-consuming. We instead explore the influence of input scale reduction on LSM. There are two main reasons for studying input reduction of LSM. First, the large input dimension of images demands efficient processing. Second, input exploration is generally more economical than architecture search. To mitigate the difficulty of handling huge LSM input spaces, and to determine whether input reduction can enhance LSM performance, we explore several input patterns, namely fullscale, scanline, chessboard, and patch. Several datasets are used to evaluate the proposed input patterns, including two spatial image datasets and one spatio-temporal image dataset. The experimental results show that, compared with the fullscale input pattern, the reduced input under the chessboard pattern improves accuracy by up to 5%, reduces execution time by up to 50%, and requires up to 75% less input storage.
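The abstract does not define the patterns precisely; as a minimal sketch, one plausible reading of the chessboard pattern is a spatial mask that keeps only pixels whose row and column indices sum to an even number, halving the input dimension fed to the LSM encoder. The function name `chessboard_reduce` and the 28x28 frame size are illustrative assumptions, not from the paper; the authors' exact sampling scheme (and the stride that yields the reported 75% storage reduction) may differ.

```python
import numpy as np

def chessboard_reduce(image: np.ndarray) -> np.ndarray:
    """Keep the pixels on the 'black' squares of a chessboard mask.

    Assumed illustration, not the paper's exact scheme: returns a 1-D
    vector of the retained pixels, roughly half the original dimension.
    """
    rows, cols = np.indices(image.shape)
    mask = (rows + cols) % 2 == 0
    return image[mask]

# Example: a 28x28 frame shrinks from 784 to 392 LSM input channels.
frame = np.random.rand(28, 28)
reduced = chessboard_reduce(frame)
print(frame.size, "->", reduced.size)  # 784 -> 392
```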