Biological spiking neural networks (SNNs) can temporally encode information in their outputs, e.g. in the rank order in which neurons fire, whereas artificial neural networks (ANNs) conventionally do not. As a result, models of SNNs for neuromorphic computing are regarded as potentially more rapid and efficient than ANNs when dealing with temporal input. On the other hand, ANNs are simpler to train, and usually achieve superior performance. Here we show that temporal coding such as rank coding (RC) inspired by SNNs can also be applied to conventional ANNs such as LSTMs, and leads to computational savings and speedups. In our RC for ANNs, we apply backpropagation through time using the standard real-valued activations, but only from a strategically early time step of each sequential input example, decided by a threshold-crossing event. Learning then also naturally incorporates _when_ to produce an output, without other changes to the model or the algorithm. Both the forward and the backward training pass can be significantly shortened by skipping the remaining input sequence after that first event. RC-training also significantly reduces time-to-insight during inference, with a minimal decrease in accuracy. The desired speed-accuracy trade-off is tunable by varying the threshold or a regularization parameter that rewards output entropy. We demonstrate these benefits in two toy problems of sequence classification, and in a temporally-encoded MNIST dataset, where our RC model achieves 99.19% accuracy after the first input time-step, outperforming the state of the art in temporal coding with SNNs, as well as in spoken-word classification on Google Speech Commands, outperforming non-RC-trained early inference with LSTMs.
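The threshold-crossing mechanism described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): a toy recurrent cell stands in for the LSTM, and inference halts at the first time step whose softmax output exceeds a confidence threshold, skipping the rest of the sequence. All names (`TanhRNNCell`, `rank_coded_inference`) and the specific architecture are assumptions made for this sketch.

```python
import numpy as np

class TanhRNNCell:
    """Toy recurrent cell; a stand-in for the LSTM used in the paper."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden_size = n_hidden
        self.W_x = rng.standard_normal((n_hidden, n_in)) * 0.5
        self.W_h = rng.standard_normal((n_hidden, n_hidden)) * 0.5

    def step(self, x, h):
        return np.tanh(self.W_x @ x + self.W_h @ h)

def rank_coded_inference(inputs, cell, readout, threshold=0.9):
    """Process a sequence step by step; stop at the first time step whose
    softmax confidence crosses `threshold` (the threshold-crossing event).
    Returns (predicted class, number of steps actually consumed)."""
    h = np.zeros(cell.hidden_size)
    probs = None
    for t, x in enumerate(inputs):
        h = cell.step(x, h)
        logits = readout @ h
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if probs.max() >= threshold:        # first threshold crossing:
            return int(probs.argmax()), t + 1  # skip the remaining input
    return int(probs.argmax()), len(inputs)    # no crossing: use last step

# Usage: a random 20-step sequence with 4 input features and 3 classes.
cell = TanhRNNCell(n_in=4, n_hidden=8)
readout = np.random.default_rng(1).standard_normal((3, 8))
seq = np.random.default_rng(2).standard_normal((20, 4))
pred, stop = rank_coded_inference(seq, cell, readout, threshold=0.5)
```

Lowering `threshold` trades accuracy for earlier decisions, mirroring the tunable speed-accuracy trade-off described in the abstract; during training, the same event would mark the time step from which backpropagation through time is applied.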