In standard deep learning, training with backpropagation (BP) consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function, but it does not allow training networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex, heterarchical structure of the neural connections in the neocortex is potentially fundamental to its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We show experimentally that this formulation, called PC graphs, can flexibly perform different tasks with the same network by simply stimulating specific neurons, and we investigate how the topology of the graph influences the final performance. We conclude by comparing against simple baselines trained with BP.
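To make the idea concrete, the following is a minimal NumPy sketch, not the paper's implementation, of PC-style inference and learning on an arbitrary, possibly cyclic graph. It assumes the standard squared-error PC energy, where each node predicts its value from its parents and relaxes by gradient descent while "stimulated" neurons stay clamped; the function names, graph, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(v):
    return np.tanh(v)

def df(v):
    return 1.0 - np.tanh(v) ** 2

def inference_step(x, W, A, clamped, gamma=0.1):
    """One gradient step on the PC energy E = 0.5 * sum_i eps_i**2,
    where eps_i = x_i - sum_j W[i, j] * f(x_j) over incoming edges."""
    Wm = W * A                      # keep only edges present in the graph
    mu = Wm @ f(x)                  # each node's prediction from its parents
    eps = x - mu                    # prediction errors
    # dE/dx_i = eps_i - f'(x_i) * sum_k W[k, i] * eps_k
    grad = eps - df(x) * (Wm.T @ eps)
    x = np.where(clamped, x, x - gamma * grad)  # stimulated neurons stay fixed
    return x, eps

def weight_update(x, eps, A, alpha=0.01):
    """Local, Hebbian-like learning step: W[i, j] += alpha * eps_i * f(x_j),
    i.e. gradient descent on E with respect to the weights."""
    return alpha * np.outer(eps, f(x)) * A

# A tiny cyclic graph: 5 neurons with random (possibly backward) connections.
n = 5
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)            # no self-loops
W = rng.normal(0.0, 0.1, (n, n)) * A

# "Stimulate" neurons 0 and 1 by clamping them to a data point;
# the rest of the graph relaxes to minimize the energy.
x = rng.normal(0.0, 0.1, n)
clamped = np.array([True, True, False, False, False])
x[clamped] = np.array([1.0, -1.0])

for _ in range(100):                # inference: relax unclamped neurons
    x, eps = inference_step(x, W, A, clamped)
W += weight_update(x, eps, A)       # learning: one local weight update
```

Because inference and learning only touch each node's local errors and edges, the same loop runs unchanged whichever neurons are clamped, which is what lets one network serve different tasks.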