The backpropagation algorithm is an invaluable tool for training artificial neural networks; however, because of a weight-sharing requirement, it does not provide a plausible model of brain function. Here, in the context of a two-layer network, we derive an algorithm for training a neural network that avoids this problem by requiring neither explicit error computation nor backpropagation. Furthermore, our algorithm maps onto a neural network that bears a remarkable resemblance to the connectivity structure and learning rules of the cortex. We find that, empirically, our algorithm performs comparably to backprop on a number of datasets.
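For reference, the weight-sharing (weight transport) requirement can be seen directly in the standard backpropagation equations for a two-layer network; the notation below ($W^{(1)}$, $W^{(2)}$, activation $f$) is generic and not taken from this paper. With hidden activity $h = f(W^{(1)} x)$, output $z = W^{(2)} h$, and output error $\delta^{(2)} = \partial L / \partial z$, the hidden-layer error is
\[
\delta^{(1)} = \big(W^{(2)}\big)^{\top} \delta^{(2)} \odot f'\big(W^{(1)} x\big),
\]
so the feedback pathway must apply the exact transpose of the feedforward weights $W^{(2)}$. A biological circuit would need a separate set of feedback synapses whose weights mirror the feedforward ones at all times, which is the implausibility the abstract refers to.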