A long-standing proposition is that by emulating the operation of the brain's neocortex, a spiking neural network (SNN) can achieve similar desirable features: flexible learning, speed, and efficiency. Temporal neural networks (TNNs) are SNNs that communicate and process information encoded as relative spike times (in contrast to spike rates). A TNN architecture is proposed, and, as a proof-of-concept, TNN operation is demonstrated within the larger context of online supervised classification. First, through unsupervised learning, a TNN partitions input patterns into clusters based on similarity. The TNN then passes a cluster identifier to a simple online supervised decoder which finishes the classification task. The TNN learning process adjusts synaptic weights by using only signals local to each synapse, and clustering behavior emerges globally. The system architecture is described at an abstraction level analogous to the gate and register transfer levels in conventional digital design. Besides features of the overall architecture, several TNN components are new to this work. Although not addressed directly, the overall research objective is a direct hardware implementation of TNNs. Consequently, all the architecture elements are simple, and processing is done at very low precision. Importantly, low precision leads to very fast learning times. Simulation results using the time-honored MNIST dataset demonstrate learning times at least an order of magnitude faster than other online approaches while providing similar error rates.
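The pipeline the abstract describes, temporal (relative spike-time) encoding, unsupervised winner-take-all clustering driven by a purely synapse-local update, and a cluster identifier handed to a downstream decoder, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the neuron model, the 3-bit time range `t_max`, the class name `WTAColumn`, and the specific local update rule are all assumptions standing in for details the abstract does not specify.

```python
import numpy as np

def encode_spike_times(x, t_max=7):
    # Time-to-first-spike encoding: stronger inputs spike earlier.
    # Rounding to integers in [0, t_max] echoes the abstract's emphasis
    # on very low precision (t_max=7 gives 3-bit spike times; assumed).
    return np.round((1.0 - x) * t_max).astype(int)

class WTAColumn:
    """Hypothetical winner-take-all column: neurons compete on how
    strongly early input spikes excite them; the winner's synapses are
    adjusted with a purely local, STDP-like rule (an illustrative
    stand-in for the paper's learning rule)."""

    def __init__(self, n_inputs, n_neurons, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, (n_neurons, n_inputs))

    def forward(self, spike_times, t_max=7):
        # Earlier spikes (smaller times) contribute more; the neuron
        # with the largest weighted early-spike mass wins and its index
        # serves as the cluster identifier passed to the decoder.
        scores = self.w @ (t_max - spike_times)
        return int(np.argmax(scores))

    def update(self, spike_times, winner, lr=0.05, t_max=7):
        # Local rule: each synapse sees only its own presynaptic spike
        # time and its own weight. Early inputs pull the weight up,
        # late inputs pull it down; only the winning neuron learns,
        # so clustering behavior emerges globally from local updates.
        pre = (t_max - spike_times) / t_max  # normalized to [0, 1]
        self.w[winner] += lr * (pre - self.w[winner])
        np.clip(self.w[winner], 0.0, 1.0, out=self.w[winner])
```

In use, each input pattern is encoded to spike times, assigned a cluster by the column, and the resulting cluster identifier would be fed to a simple online supervised decoder (not shown) that maps clusters to class labels.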