Many of the recent advances in the field of artificial intelligence have been fueled by the highly successful backpropagation of error (BP) algorithm, which efficiently solves the credit assignment problem in artificial neural networks. However, it is unlikely that BP is implemented in its usual form within biological neural networks, because of its reliance on non-local information when propagating error gradients. Since biological neural networks are capable of highly efficient learning, and since the responses of BP-trained models can be related to neural responses, it seems reasonable that a biologically viable approximation of BP underlies synaptic plasticity in the brain. Gradient-adjusted incremental target propagation (GAIT-prop, or GP for short) has recently been derived directly from BP and has been shown to successfully train networks in a more biologically plausible manner. However, so far, GP has only been shown to work on relatively low-dimensional problems, such as handwritten-digit recognition. This work addresses some of the scaling issues in GP and shows that it performs effective multi-layer credit assignment in deeper networks and on the much more challenging ImageNet dataset.