Backpropagation of error (BP) is a widely used and highly successful learning algorithm. However, its reliance on non-local information when propagating error gradients makes it seem an unlikely candidate for learning in the brain. Over the last decade, a number of investigations have asked whether alternative, more biologically plausible computations can be used to approximate BP. This work builds on one such local learning algorithm, Gradient-Adjusted Incremental Target Propagation (GAIT-prop), which has recently been shown to approximate BP in a biologically plausible manner. GAIT-prop constructs local, layer-wise weight-update targets in order to enable plausible credit assignment. In deep networks, however, the local weight updates computed by GAIT-prop can deviate from those of BP for a number of reasons. Here, we provide and test methods to overcome these sources of error. In particular, we adaptively rescale the locally computed errors and show that this significantly improves the performance and stability of the GAIT-prop algorithm on the CIFAR-10 dataset.
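To make the idea of adaptively rescaling locally computed errors concrete, the following is a minimal, illustrative sketch in Python. It is not the paper's implementation: the function name `rescale_local_errors`, the per-layer norm-matching rule, and the `target_norm` parameter are all assumptions introduced here for illustration.

```python
import numpy as np

def rescale_local_errors(local_errors, target_norm=1.0, eps=1e-8):
    """Adaptively rescale per-layer local error vectors.

    Illustrative sketch only: scales each layer's error toward a
    common magnitude, so that deviations from BP do not shrink or
    blow up as they accumulate across the layers of a deep network.
    The concrete rescaling rule in the paper may differ.
    """
    rescaled = []
    for e in local_errors:
        norm = np.linalg.norm(e)
        # Bring this layer's error norm to target_norm; eps guards
        # against division by zero for near-silent layers.
        rescaled.append(e * (target_norm / (norm + eps)))
    return rescaled

# Hypothetical usage: given layer-wise targets and activations,
# form local errors and rescale them before the weight updates.
targets = [np.random.randn(64), np.random.randn(32)]
activations = [np.random.randn(64), np.random.randn(32)]
errors = [t - h for t, h in zip(targets, activations)]
errors = rescale_local_errors(errors)
```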