Learning in neural networks is often framed as a problem in which targeted error signals are propagated directly to the parameters and used to produce updates that improve network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. We propose constrained parameter inference (COPI) as a new principle for learning. The COPI approach assumes that learning can be set up such that parameters infer their own values from observations of their local neuron activities. We find that this estimation of network parameters is possible under two constraints: decorrelated neural inputs and top-down perturbations of neural states for credit assignment. We show that the decorrelation required by COPI allows learning at extremely high learning rates, competitive with those achieved by the adaptive optimizers used with BP. We further demonstrate that COPI affords a new approach to feature analysis and network compression. Finally, we argue that COPI may shed new light on learning in biological networks, given the evidence for decorrelation in the brain.
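To make the core idea concrete, the following minimal NumPy sketch (illustrative only, not the paper's exact update rule) shows why decorrelation enables local parameter inference: when the input covariance is approximately the identity, the least-squares weights decompose into per-parameter correlations, so each weight can estimate its own value from the pre- and post-synaptic activity it observes locally. All names here (W_true, W_est, etc.) are hypothetical.

import numpy as np

# Illustrative sketch of local parameter inference under decorrelated
# inputs (an assumption-laden toy, not COPI's actual update rule):
# with E[x x^T] = I, the least-squares solution decomposes elementwise,
# w_ij = E[y_i x_j] / E[x_j^2], so each parameter can be inferred from
# purely local activity statistics.

rng = np.random.default_rng(0)

n_in, n_out, n_samples = 8, 4, 10_000
W_true = rng.normal(size=(n_out, n_in))  # weights to be inferred

# Decorrelated inputs: i.i.d. unit-variance features, so E[x x^T] = I.
X = rng.normal(size=(n_samples, n_in))
# Stand-in for target-perturbed output states carrying credit signals.
Y = X @ W_true.T

# Local estimate: sample average of pre/post activity products,
# i.e. E[y_i x_j]; the divisor E[x_j^2] is 1 by construction.
W_est = (Y.T @ X) / n_samples

print(np.max(np.abs(W_est - W_true)))  # small estimation error

Without the decorrelation constraint, E[y_i x_j] would mix contributions from correlated input dimensions and the purely local estimate would be biased; this is why decorrelated inputs are one of the two constraints named in the abstract.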