The error backpropagation (BP) algorithm is a key method for training deep neural networks. While effective, it is demanding in terms of computation, memory usage, and energy, which makes it ill-suited to online learning on edge devices that require high processing rates and low energy consumption. More importantly, BP does not take advantage of the parallelism and locality offered by dedicated neural processors. There is therefore a demand for alternatives to BP that improve the latency, memory requirements, and energy footprint of neural networks on hardware. In this work, we propose a novel method based on Direct Feedback Alignment (DFA) that uses forward-mode automatic differentiation to estimate backpropagation paths and learn feedback connections in an online manner. We show experimentally that the resulting method, Directional DFA, achieves performance closer to that of BP than other feedback methods on several benchmark datasets and architectures, while retaining the locality and parallelization benefits of DFA. Moreover, we show that, unlike other feedback-learning algorithms, our method provides stable learning for convolutional layers.
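To make the core idea concrete, below is a minimal, hypothetical sketch in JAX, not the paper's exact algorithm: it uses jax.jvp to obtain Jacobian-vector products of the downstream path in forward mode and uses them to fit a DFA feedback matrix B toward the transposed Jacobian, online and without any backward pass. The network shape, the least-squares alignment objective, and the learning rate are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch (illustrative, not the paper's algorithm): learn a DFA
# feedback matrix B so that B^T aligns with the Jacobian J = dy/dh of the
# downstream path, using only forward-mode Jacobian-vector products (jax.jvp).

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

W2 = jax.random.normal(k1, (10, 64)) * 0.1   # downstream (output) weights
B = jax.random.normal(k2, (64, 10)) * 0.1    # feedback matrix to be learned
lr = 0.5                                     # illustrative learning rate

def head(h):
    # Downstream path from hidden activity h to the output y = W2 h.
    return W2 @ h

h = jnp.zeros(64)  # linearization point (the path is linear, so any h works)

for step_key in jax.random.split(k3, 500):
    # Random unit tangent direction in hidden space.
    v = jax.random.normal(step_key, (64,))
    v = v / jnp.linalg.norm(v)
    # Forward-mode AD: u = J v, computed without any backward pass.
    _, u = jax.jvp(head, (h,), (v,))
    # Online least-squares step on 0.5 * ||B^T v - u||^2, pulling B^T toward J.
    err = B.T @ v - u
    B = B - lr * jnp.outer(v, err)

# B now approximates W2^T, so the DFA error projection B @ e approximates the
# true backpropagated signal W2^T @ e for an output error e.
print(jnp.linalg.norm(B - W2.T) / jnp.linalg.norm(W2.T))
```

In a full DFA training loop, an update of this kind could run alongside the forward pass, which is consistent with the locality and parallelism properties the abstract claims: each layer's feedback weights are adjusted from forward-mode quantities, without propagating errors backward through the network.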