Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs, underlying both perception and learning, is the minimization of prediction errors. Although the theory is motivated by high-level notions of variational inference, detailed neurophysiological models of cortical microcircuits that can implement its computations have been developed. Moreover, under certain conditions, predictive coding has been shown to approximate the backpropagation of error algorithm, and it thus provides a relatively biologically plausible credit-assignment mechanism for training deep networks. However, standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and one-to-one error-unit connectivity. In this paper, we show that these features are not integral to the algorithm and can be removed, either directly or by learning additional sets of parameters with Hebbian update rules, without noticeable harm to learning performance. Our work thus relaxes current constraints on potential microcircuit designs and hopefully opens up new regions of the design space for neuromorphic implementations of predictive coding.
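To make the kind of relaxation described above concrete, the following is a minimal sketch (not code from this paper) of a single discriminative predictive-coding layer in which the symmetric backward weights W.T are replaced by a separately learned matrix B, and both weight sets are updated with local, Hebbian-style (pre-synaptic times post-synaptic) rules. All names, dimensions, and learning-rate values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of one predictive-coding layer pair with relaxed weight symmetry:
# errors are carried backward through a learned matrix B instead of the transpose W.T.
rng = np.random.default_rng(0)
n_in, n_out = 20, 10
W = rng.normal(scale=0.1, size=(n_out, n_in))   # forward (prediction) weights
B = rng.normal(scale=0.1, size=(n_in, n_out))   # learned backward weights (replaces W.T)

def f(x):
    # activation nonlinearity
    return np.tanh(x)

x = rng.normal(size=n_in)        # activity of the lower layer
target = rng.normal(size=n_out)  # activity clamped at the upper layer (e.g. by a label)

lr, n_inference_steps = 0.05, 20
for _ in range(n_inference_steps):
    pred = W @ f(x)              # forward prediction of the upper layer's activity
    eps = target - pred          # prediction error at the upper layer
    # inference: lower-layer activity descends the prediction-error energy,
    # with the error mapped back through the learned weights B rather than W.T
    x = x + lr * (B @ eps) * (1.0 - f(x) ** 2)

# learning: local Hebbian-style outer-product updates for both weight sets
eps = target - W @ f(x)
W = W + lr * np.outer(eps, f(x))
B = B + lr * np.outer(f(x), eps)
```

The update for B uses only quantities available at the connected units (the post-synaptic activity f(x) and the error eps), which is the sense in which the extra parameter set can be learned with a Hebbian rule; the derivative term in the inference step is one of the features the paper argues can likewise be removed.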