Spiking neural networks (SNNs) have recently attracted attention due to their promising capabilities. SNNs simulate the brain with greater biological plausibility than previous generations of neural networks. Learning from fewer samples and consuming less power are among their key features. However, the theoretical advantages of SNNs have not been realized in practice due to the slowness of simulation tools and the impracticality of the proposed network structures. In this work, we implement a high-performance library named Spyker from scratch in C++/CUDA that outperforms its predecessors. To demonstrate the practicality of the library for simulating large-scale networks, we implement several SNNs with different learning rules (spike-timing-dependent plasticity and reinforcement learning) using Spyker and achieve significantly better runtimes. To our knowledge, no other tool simulates large-scale spiking neural networks with high performance using a modular structure. Furthermore, we compare the stimulus representations extracted from Spyker to recorded electrophysiology data to demonstrate the applicability of SNNs in describing the underlying neural mechanisms of brain function. This library aims to take a significant step toward uncovering the true potential of brain computation using SNNs.