The de facto algorithm for the backward pass in training feedforward neural networks is backpropagation (BP). The use of almost-everywhere differentiable activation functions makes it efficient and effective to propagate the gradient backwards through the layers of a deep network. In recent years, however, there has been considerable research into alternatives to backpropagation. This research has largely focused on matching state-of-the-art accuracy in multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). In this paper, we analyze the stability and similarity of predictions and neurons in MLPs and propose a new variation of one of these algorithms.
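As a concrete illustration of the backward pass referred to above, the following minimal sketch performs one step of backpropagation for a tiny two-layer MLP with a sigmoid activation (differentiable everywhere). The layer sizes, the sigmoid choice, and the learning rate are hypothetical values chosen for this example only; this is not the specific setup studied in the paper.

```python
# Illustrative sketch: one backpropagation step for a tiny two-layer MLP.
# All dimensions and hyperparameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 inputs, 8 hidden units, 1 output.
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 1)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.standard_normal((1, 4))   # one input example
y = np.array([[1.0]])             # its target

# Forward pass: store the activations the backward pass will need.
h = sigmoid(x @ W1)
y_hat = h @ W2
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: the chain rule propagates the error signal
# layer by layer, from the output back toward the input.
d_yhat = (y_hat - y) / y.size   # dL/dy_hat
dW2 = h.T @ d_yhat              # gradient for the output layer
d_h = d_yhat @ W2.T             # error pushed back through W2
d_pre = d_h * h * (1.0 - h)     # sigmoid'(z) = s(z) * (1 - s(z))
dW1 = x.T @ d_pre               # gradient for the first layer

# Gradient-descent update with a hypothetical learning rate.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
```

The key property BP relies on is visible in the `d_pre` line: because the activation is differentiable (here, everywhere), the error signal can be pushed through each layer by multiplying with the local derivative, which is exactly the step the alternative training algorithms discussed in this paper seek to replace.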