Backpropagation has rapidly become the workhorse credit assignment algorithm for modern deep learning methods. Recently, modified forms of predictive coding (PC), an algorithm with origins in computational neuroscience, have been shown to yield parameter updates that are approximately or exactly equal to those of backpropagation. Owing to this connection, it has been suggested that PC can act as an alternative to backpropagation with desirable properties that may facilitate implementation in neuromorphic systems. Here, we examine these claims for the different contemporary PC variants proposed in the literature. We obtain time complexity bounds for these PC variants, which we show are lower-bounded by the complexity of backpropagation. We also characterize key properties of these variants that bear on their neurobiological plausibility and interpretation, particularly from the perspective of standard PC as a variational Bayes algorithm for latent probabilistic models. Our findings shed new light on the connection between the two learning frameworks and suggest that, in its current forms, PC may have more limited potential as a direct replacement for backpropagation than previously envisioned.
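As a concrete reference point for the standard scheme the abstract alludes to, the sketch below implements vanilla PC on a small feedforward Gaussian hierarchy in NumPy: the activities are first relaxed by gradient descent on the prediction-error energy (the variational inference phase), and the weights are then updated locally from the settled errors. This is a minimal sketch of the textbook algorithm, not this paper's implementation; all names, layer sizes, and learning rates (`pc_update`, `lr_x`, `lr_w`, `n_inference`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):        # activation function
    return np.tanh(x)

def f_prime(x):  # its derivative
    return 1.0 - np.tanh(x) ** 2

# Illustrative layer sizes: input -> hidden -> output.
sizes = [4, 8, 2]
W = [rng.normal(0, 0.1, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

def pc_update(x_in, y_target, W, n_inference=50, lr_x=0.1, lr_w=0.01):
    """One PC learning step (a sketch of the standard scheme): iterative
    inference over the activities, then a local weight update from the
    settled prediction errors."""
    L = len(W)
    # Initialise activities with a feedforward pass; clamp input and target.
    x = [x_in]
    for l in range(L):
        x.append(W[l] @ f(x[l]))
    x[L] = y_target.copy()

    for _ in range(n_inference):
        # Prediction errors: e[l] = x[l+1] - W[l] f(x[l]).
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
        # Gradient descent on the energy w.r.t. the hidden activities only.
        for l in range(1, L):
            x[l] += lr_x * (-e[l - 1] + f_prime(x[l]) * (W[l].T @ e[l]))

    # Local, Hebbian-like weight update from the converged errors.
    e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
    for l in range(L):
        W[l] += lr_w * np.outer(e[l], f(x[l]))
    return W

# Toy usage: one learning step on random data.
x_in = rng.normal(size=4)
y_target = rng.normal(size=2)
W = pc_update(x_in, y_target, W)
```

Note that each learning step runs an inner loop of `n_inference` relaxation iterations before any weights change; this sequential inference phase, absent from a single backpropagation pass, is the intuition behind the time complexity bounds the abstract refers to.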