In this work, we extend standard neural networks by building on the assumption that neuronal activations correspond to the angle of a complex number lying on the unit circle, or 'phasor.' Each layer in such a network produces new activations by taking a weighted superposition of the previous layer's phases and computing the phase of the resulting sum. This generalized architecture allows models to reach high accuracy and carries the singular advantage that mathematically equivalent versions of the network can be executed with or without regard to a temporal variable. Importantly, the value of a phase angle in the temporal domain can be sparsely represented by a periodically repeating series of delta functions, or 'spikes'. We demonstrate the atemporal training of a phasor network on standard deep learning tasks and show that these networks can then be executed in either the traditional atemporal domain or the spiking temporal domain with no conversion step needed. This provides a novel basis for constructing deep networks which operate via temporal, spike-based calculations suitable for neuromorphic computing hardware.
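As a rough illustration of the mechanism described above (not the paper's implementation), the following minimal NumPy sketch shows a phasor layer's forward pass and a spike-train encoding of a phase angle. The names `phasor_layer` and `phase_to_spike_times` are hypothetical, and the real-valued weights are an assumption; the superposition could equally use complex-valued weights.

```python
import numpy as np

def phasor_layer(phases_in, weights):
    """Hypothetical phasor layer: lift phases onto the unit circle,
    take a weighted superposition, and return the phase of the sum."""
    z = np.exp(1j * phases_in)   # unit-circle complex activations
    s = weights @ z              # weighted superposition (complex sum)
    return np.angle(s)           # new phase values in (-pi, pi]

def phase_to_spike_times(phase, period=1.0, n_cycles=3):
    """Sketch of the temporal representation: a phase angle becomes a
    periodic train of spikes (delta functions), offset within each
    period in proportion to the phase."""
    offset = (phase % (2 * np.pi)) / (2 * np.pi) * period
    return offset + period * np.arange(n_cycles)

rng = np.random.default_rng(0)
phases = rng.uniform(-np.pi, np.pi, size=8)  # previous layer's phases
W = rng.normal(size=(4, 8))                  # assumed real weight matrix
out = phasor_layer(phases, W)
print(out)                                   # atemporal phase activations
print(phase_to_spike_times(out[0]))          # same value as spike times
```

Under this reading, the atemporal and temporal executions agree because the spike times are just a change of representation of the same phase angle, which is why no conversion step is needed between the two domains.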