Supervised learning in artificial neural networks typically relies on backpropagation, in which weight updates are computed from error-function gradients propagated sequentially from the output layer back to the input layer. Although this approach has proven effective across a wide range of applications, it lacks biological plausibility in many regards, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes have been introduced, including sign symmetry, feedback alignment, and direct feedback alignment, but they all retain a backward pass, which prevents them from addressing all of these issues simultaneously. Here, we propose to replace the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. We show that this novel learning rule comprehensively addresses all the above-mentioned issues and can be applied to both fully connected and convolutional models. We test this learning rule on MNIST, CIFAR-10, and CIFAR-100. These results help incorporate biological principles into machine learning.
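For concreteness, the following is a minimal sketch of how such an error-modulated second forward pass could look for a small fully connected network. The abstract only states that the input is modulated by the network's error; the fixed random projection `F` mapping the output error back to the input space, the local update rule contrasting activations between the two passes, and all sizes and hyperparameters below are illustrative assumptions, not details given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer fully connected network (sizes are illustrative).
n_in, n_hid, n_out = 784, 256, 10
W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_hid, n_in))
W2 = rng.normal(0.0, np.sqrt(2.0 / n_hid), (n_out, n_hid))
# Assumed: a fixed random matrix that projects the output error back
# onto the input, replacing the backward pass.
F = rng.normal(0.0, 0.05, (n_in, n_out))

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(W1 @ x)
    y = W2 @ h  # linear readout; softmax/loss omitted for brevity
    return h, y

def train_step(x, target, lr=0.01):
    global W1, W2
    # First (standard) forward pass.
    h1, y1 = forward(x)
    e = y1 - target  # network error at the output
    # Second forward pass on the error-modulated input; no gradients
    # are propagated backward through the network.
    x_mod = x + F @ e
    h2, y2 = forward(x_mod)
    # Assumed local updates: the hidden layer contrasts its activity
    # across the two passes; the output layer uses the error directly.
    W1 -= lr * np.outer(h1 - h2, x_mod)
    W2 -= lr * np.outer(e, h2)

# Usage on a dummy sample: one 784-dim input, one-hot 10-class target.
x = rng.random(n_in)
t = np.eye(n_out)[3]
train_step(x, t)
```

Note that both passes use the same feedforward weights, so no symmetric backward weights are needed, and each layer's update depends only on its own pre- and post-synaptic activity, which is what makes the rule local in the sense discussed above.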