Deep learning has redefined the field of artificial intelligence (AI) thanks to the rise of artificial neural networks, architectures loosely inspired by their neurological counterparts in the brain. Over the years, this interplay between AI and neuroscience has brought immense benefits to both fields, allowing neural networks to be used in dozens of applications. These networks are trained with an efficient implementation of reverse-mode differentiation, called backpropagation (BP). This algorithm, however, is often criticized for its biological implausibility (e.g., the lack of local update rules for the parameters). Therefore, biologically plausible learning methods that rely on predictive coding (PC), a framework for describing information processing in the brain, are increasingly studied. Recent works prove that these methods can approximate BP up to a certain margin on multilayer perceptrons (MLPs), and asymptotically on any other complex model, and that zero-divergence inference learning (Z-IL), a variant of PC, is able to exactly implement BP on MLPs. However, the recent literature also shows that no biologically plausible method yet exists that can exactly replicate the weight updates of BP on complex models. To fill this gap, in this paper, we generalize PC and Z-IL by defining them directly on computational graphs, and show that the resulting algorithm performs exact reverse differentiation. What results is the first biologically plausible algorithm whose parameter updates are equivalent to those of BP on any neural network, providing a bridge between the interdisciplinary research of neuroscience and deep learning.
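For readers unfamiliar with the operation that the paper targets, the following is a minimal illustrative sketch (not the paper's Z-IL algorithm) of reverse-mode differentiation on a tiny scalar computational graph; this is the gradient computation that BP implements and that the generalized Z-IL is claimed to replicate exactly. The `Node` class and `backward` function are hypothetical names introduced only for this example.

```python
# Minimal sketch of reverse-mode differentiation (the core of BP) on a
# scalar computational graph. Illustrative only; not the paper's method.

class Node:
    """A scalar value in a computational graph with reverse-mode gradients."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value               # forward-pass value
        self.parents = parents           # nodes this value was computed from
        self.local_grads = local_grads   # d(self)/d(parent) for each parent
        self.grad = 0.0                  # accumulated d(output)/d(self)

    def __mul__(self, other):
        return Node(self.value * other.value, (self, other),
                    (other.value, self.value))

    def __add__(self, other):
        return Node(self.value + other.value, (self, other), (1.0, 1.0))


def backward(output):
    """Propagate gradients from the output node back through the graph.

    For simplicity this traversal assumes a tree-shaped graph; a general DAG
    with shared nodes would require a reverse topological ordering.
    """
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in zip(node.parents, node.local_grads):
            parent.grad += node.grad * local   # chain rule
            stack.append(parent)


# Example: f(w, x, b) = w * x + b, so df/dw = x, df/dx = w, df/db = 1.
w, x, b = Node(2.0), Node(3.0), Node(1.0)
f = w * x + b
backward(f)
print(w.grad, x.grad, b.grad)   # 3.0 2.0 1.0
```

The exactness claim of the paper is about this quantity: for every parameter, the update computed by the biologically plausible scheme matches the gradient that reverse-mode differentiation produces on the same computational graph.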