The backpropagation (BP) algorithm has been the mainstream learning procedure for neural networks over the past decade and has played a significant role in the development of deep learning. However, it suffers from several limitations, such as getting stuck in local minima and vanishing/exploding gradients, and its biological plausibility has been questioned. To address these limitations, alternatives to backpropagation have been preliminarily explored, with the Forward-Forward (FF) algorithm being one of the best known. In this paper we propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization. Unlike FF, our framework directly outputs a label distribution at each cascaded block, which removes the need to generate additional negative samples and thus leads to a more efficient process at both training and test time. Moreover, each block in our framework can be trained independently, so the method can be easily deployed on parallel acceleration systems. The proposed method is evaluated on four public image classification benchmarks, and the experimental results show significant improvements in prediction accuracy over the baseline.
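The cascaded, BP-free design can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the paper's implementation: each block here applies a fixed random feature transform (a hypothetical simplification), a per-block linear-softmax predictor is trained independently by cross-entropy so no gradient ever flows backward across blocks, and the per-block label distributions are averaged to form the final prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy 2-class data: two Gaussian blobs, a stand-in for image features.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)),
               rng.normal(1.0, 1.0, (100, 20))])
y = np.repeat([0, 1], 100)
Y = np.eye(2)[y]  # one-hot labels

def train_predictor(H, Y, steps=200, lr=0.5):
    # Per-block linear-softmax predictor trained by cross-entropy.
    # It sees only this block's features; no gradient reaches earlier blocks.
    V = np.zeros((H.shape[1], Y.shape[1]))
    for _ in range(steps):
        P = softmax(H @ V)
        V -= lr * H.T @ (P - Y) / len(H)
    return V

H, dists = X, []
for _ in range(2):  # two cascaded blocks
    # Fixed random transform as the block's feature map (hypothetical choice).
    W = rng.normal(0.0, 1.0 / np.sqrt(H.shape[1]), (H.shape[1], 32))
    H = np.maximum(H @ W, 0)           # block features, passed forward
    V = train_predictor(H, Y)          # trained independently of other blocks
    dists.append(softmax(H @ V))       # each block outputs a label distribution

P = np.mean(dists, axis=0)             # fuse the per-block distributions
acc = (P.argmax(1) == y).mean()
```

Because each predictor depends only on its own block's (fixed) input features, the two training loops are independent and could run in parallel, which is the property the abstract highlights.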