Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals guide parameter updates towards improved network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. However, BP relies on the global transmission of gradient information and has therefore been criticised for its biological implausibility. We propose constrained parameter inference (COPI) as a new principle for learning. COPI allows for the estimation of network parameters under the constraints of decorrelated neural inputs and top-down perturbations of neural states. We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning, compared with the backpropagation algorithm.