We propose a new learning framework, signal propagation (sigprop), for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation. In sigprop, there is only the forward path for inference and learning. So, there are no structural or computational constraints necessary for learning to take place beyond the inference model itself, such as feedback connectivity, weight transport, or a backward pass, which exist under backpropagation-based approaches. That is, sigprop enables global supervised learning with only a forward path. This is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. By construction, sigprop has greater compatibility than backpropagation with models of learning in the brain and in hardware, including alternative approaches that relax learning constraints. We also demonstrate that sigprop is more efficient in time and memory than these approaches. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals relative to backpropagation. To further support relevance to biological and hardware learning, we use sigprop to train continuous time neural networks with Hebbian updates, and train spiking neural networks with only the voltage or with biologically and hardware compatible surrogate functions.
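The forward-only mechanism described above can be made concrete with a short sketch. The PyTorch code below is a minimal illustration under our own assumptions, not the paper's implementation: the label is embedded into the input space and sent along the same forward path as the input, and each layer trains on a local loss between the two resulting activations. All names (SigpropNet, embed, local_losses) are hypothetical, and MSE stands in here for whatever local objective an actual sigprop implementation would use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SigpropNet(nn.Module):
    """Hypothetical forward-only learner in the spirit of sigprop."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        # Embed the label into the input space so the learning signal
        # can travel the same forward path the input uses.
        self.embed = nn.Linear(n_classes, in_dim)
        self.layers = nn.ModuleList([
            nn.Linear(in_dim, hid_dim),
            nn.Linear(hid_dim, hid_dim),
        ])

    def forward(self, x, y_onehot):
        t = self.embed(y_onehot)  # learning signal enters via the forward pass
        local_losses = []
        for layer in self.layers:
            x = F.relu(layer(x))  # input pathway
            t = F.relu(layer(t))  # target pathway: same weights, same direction
            # Local objective: pull each input activation toward the activation
            # produced by its class's target signal (MSE used here for brevity).
            local_losses.append(F.mse_loss(x, t))
            # Detach so no gradient crosses layer boundaries: each layer can
            # update as soon as its own forward step completes, which is what
            # permits parallel, forward-only training.
            x, t = x.detach(), t.detach()
        return local_losses

# Illustrative usage with random data.
net = SigpropNet(in_dim=784, hid_dim=256, n_classes=10)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x = torch.randn(32, 784)
y = F.one_hot(torch.randint(0, 10, (32,)), num_classes=10).float()
opt.zero_grad()
sum(net(x, y)).backward()  # autograd computes only within-layer gradients
opt.step()
```

Because each layer's loss depends only on its own forward outputs, the detach calls ensure no learning signal travels backward: layers or modules can be updated in parallel, and no feedback connectivity, weight transport, or backward pass is required beyond the inference model itself.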