Spiking neural networks, often referred to as the third generation of neural networks, carry the potential for a massive reduction in memory and energy consumption compared to traditional, second-generation neural networks. Inspired by the undisputed efficiency of the human brain, they introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware. To open a pathway toward engineering applications, we introduce this exciting technology in the context of continuum mechanics. However, the spiking nature of these networks poses a challenge for regression problems, which frequently arise in the modeling of engineering sciences. To overcome this challenge, a framework for regression using spiking neural networks is proposed. In particular, a network topology for decoding binary spike trains to real numbers is introduced, utilizing the membrane potential of spiking neurons. As the aim of this contribution is a concise introduction to this new methodology, several different spiking neural architectures, ranging from simple spiking feed-forward networks to complex spiking long short-term memory neural networks, are derived. Several numerical experiments directed towards regression of linear and nonlinear, history-dependent material models are carried out. A direct comparison with counterparts of traditional neural networks shows that the proposed framework is substantially more efficient in terms of memory and energy consumption while retaining precision and generalizability. All code has been made publicly available in the interest of reproducibility and to promote continued enhancement in this new domain.
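To illustrate the decoding idea mentioned above, the following is a minimal sketch (a hypothetical example, not the paper's exact topology): a non-firing leaky integrate-and-fire readout neuron accumulates a binary spike train into its membrane potential, and the final potential serves as the real-valued regression output. The function name, the decay factor `beta`, and the input weight are illustrative assumptions.

```python
import numpy as np

def lif_membrane_readout(spike_train, weight=1.0, beta=0.9):
    """Decode a binary spike train to a real number via the membrane
    potential of a non-firing leaky integrate-and-fire (LIF) neuron.

    At each time step t the membrane potential u follows
        u[t] = beta * u[t-1] + weight * s[t],
    where s[t] is 0 or 1. The readout neuron never emits a spike, so
    its final membrane potential is a continuous, real-valued output.
    (Illustrative sketch; parameter values are assumptions.)
    """
    u = 0.0
    for s in spike_train:
        u = beta * u + weight * float(s)
    return u

# A denser spike train leaves the readout neuron with a higher
# final membrane potential than a sparser one.
sparse = np.array([1, 0, 0, 0, 0, 0, 0, 0])
dense = np.array([1, 1, 0, 1, 1, 0, 1, 1])
```

In a trained network, the upstream spiking layers would shape the spike trains feeding such a readout so that the decoded potential matches the regression target.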