Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to the ubiquitous backpropagation algorithm for training deep neural networks. Despite relying on random feedback weights for the backward pass, DFA successfully trains state-of-the-art models such as Transformers. On the other hand, it notoriously fails to train convolutional networks. An understanding of the inner workings of DFA to explain these diverging results remains elusive. Here, we propose a theory for the success of DFA. We first show that learning in shallow networks proceeds in two steps: an alignment phase, where the model adapts its weights to align the approximate gradient with the true gradient of the loss function, is followed by a memorisation phase, where the model focuses on fitting the data. This two-step process has a degeneracy breaking effect: out of all the low-loss solutions in the landscape, a network trained with DFA naturally converges to the solution which maximises gradient alignment. We also identify a key quantity underlying alignment in deep linear networks: the conditioning of the alignment matrices. The latter enables a detailed understanding of the impact of data structure on alignment, and suggests a simple explanation for the well-known failure of DFA to train convolutional neural networks. Numerical experiments on MNIST and CIFAR10 clearly demonstrate degeneracy breaking in deep non-linear networks and show that the align-then-memorize process occurs sequentially from the bottom layers of the network to the top.
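To make the mechanism concrete, below is a minimal NumPy sketch of DFA on a two-hidden-layer network, contrasted with the backpropagation deltas it approximates. The architecture, tanh non-linearity, layer sizes, learning rate, and the linear-teacher task are illustrative assumptions, not the paper's exact setup; `B1` and `B2` are the fixed random feedback matrices, and the returned cosine similarities measure the per-layer gradient alignment the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, lr = 10, 64, 2, 0.05  # illustrative sizes, not the paper's

# Trainable forward weights of a 2-hidden-layer MLP.
W1 = rng.normal(0, 1 / np.sqrt(d_in), (d_h, d_in))
W2 = rng.normal(0, 1 / np.sqrt(d_h), (d_h, d_h))
W3 = rng.normal(0, 1 / np.sqrt(d_h), (d_out, d_h))

# Fixed random feedback matrices: DFA sends the output error straight
# to each hidden layer instead of back through the transposed weights.
B1 = rng.normal(0, 1 / np.sqrt(d_out), (d_h, d_out))
B2 = rng.normal(0, 1 / np.sqrt(d_out), (d_h, d_out))

def dfa_step(x, y):
    """One SGD step with DFA; returns the per-layer gradient alignment."""
    global W1, W2, W3
    # Forward pass with tanh activations.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    e = W3 @ h2 - y                        # output error (MSE gradient)

    # True backprop deltas, computed here only to measure alignment.
    bp2 = (W3.T @ e) * (1 - h2 ** 2)
    bp1 = (W2.T @ bp2) * (1 - h1 ** 2)

    # DFA deltas: the same error, fed back through fixed random matrices.
    d2 = (B2 @ e) * (1 - h2 ** 2)
    d1 = (B1 @ e) * (1 - h1 ** 2)

    # Updates use the DFA deltas; the last layer is identical to backprop.
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)

    # Alignment = cosine similarity between the DFA and backprop deltas.
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cos(d1, bp1), cos(d2, bp2)

# Toy run: regress a random linear teacher. Alignment typically rises
# early (the "align" phase) before the loss itself is driven down.
teacher = rng.normal(size=(d_out, d_in))
for t in range(2001):
    x = rng.normal(size=d_in)
    c1, c2 = dfa_step(x, teacher @ x)
    if t % 500 == 0:
        print(f"step {t:4d}: alignment layer 1 = {c1:.2f}, layer 2 = {c2:.2f}")
```

Note that the output-layer update coincides with backpropagation by construction; the question the abstract addresses is why the hidden-layer DFA deltas come to align with the true gradients at all.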