Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been offered as a potentially more biologically realistic alternative to backpropagation for training neural networks. In this manuscript, I review and extend recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. I discuss some implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning, and I describe a repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models.
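The central relationship reviewed here, that predictive coding under a fixed-prediction assumption recovers the gradients computed by backpropagation, can be illustrated with a minimal PyTorch sketch. The network architecture, step size, and iteration count below are illustrative assumptions, and the code does not use the Torch2PC API; it only demonstrates the underlying equivalence:

```python
import torch

torch.manual_seed(0)

# A tiny feedforward network written as a list of layers f_1, ..., f_L
layers = torch.nn.ModuleList([
    torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh()),
    torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Tanh()),
    torch.nn.Linear(8, 2),
])
L = len(layers)

x0 = torch.randn(5, 4)  # batch of inputs
y = torch.randn(5, 2)   # targets

# ---- Reference: backpropagation gradients ----
v = x0
for f in layers:
    v = f(v)
loss = 0.5 * ((v - y) ** 2).sum()
loss.backward()
bp_grads = [p.grad.clone() for p in layers.parameters()]
layers.zero_grad()

# ---- Predictive coding with the fixed-prediction assumption ----
# Forward pass gives the fixed predictions v_l = f_l(v_{l-1})
vs = [x0]
with torch.no_grad():
    for f in layers:
        vs.append(f(vs[-1]))

# Prediction errors eps_l; for squared-error loss, eps_L = dL/dv_L = v_L - y
eps = [None] + [torch.zeros_like(v) for v in vs[1:]]
eps[L] = vs[L] - y

# Inference iterations relax each eps_l toward the fixed point
# eps_l = J_{l+1}^T eps_{l+1}, which is exactly the backprop recursion.
eta = 0.5
for _ in range(100):
    for l in range(L - 1, 0, -1):
        inp = vs[l].detach().requires_grad_(True)
        out = layers[l](inp)  # f_{l+1} evaluated at the fixed prediction v_l
        (vjp,) = torch.autograd.grad(out, inp, grad_outputs=eps[l + 1])
        eps[l] = eps[l] + eta * (vjp - eps[l])

# Parameter gradients from the converged errors; these match backprop
pc_grads = []
for l, f in enumerate(layers, start=1):
    out = f(vs[l - 1].detach())
    grads = torch.autograd.grad(out, list(f.parameters()), grad_outputs=eps[l])
    pc_grads.extend(g.clone() for g in grads)

max_err = max((a - b).abs().max().item() for a, b in zip(bp_grads, pc_grads))
print(f"max |PC grad - BP grad| = {max_err:.2e}")
```

Because the output error is held fixed during inference, each eps_l converges geometrically to the vector-Jacobian product that backpropagation would compute, so the printed discrepancy shrinks toward floating-point precision as the iteration count grows.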