The vanishing gradient problem was a major obstacle to the success of deep learning. In recent years it has been gradually alleviated through a variety of techniques. However, the problem has not been fundamentally overcome, since it is inherent to neural networks whose neuron activations are based on dot products. In a series of papers, we analyze alternative neural network structures that are not based on dot products. In this first paper, we revisit neural networks built from layers based on distance measures and Gaussian activation functions. Such networks were used only sparsely in the past, since they are hard to train with plain stochastic gradient descent. We show that by using Root Mean Square Propagation (RMSProp) it is possible to train multi-layer networks of this kind efficiently. Furthermore, we show that, when appropriately initialized, these networks suffer far less from the vanishing and exploding gradient problems than traditional neural networks, even when they are deep.
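To make the described architecture concrete, the following is a minimal sketch (not the authors' implementation) of a distance-based layer with a Gaussian activation, exp(-||x - w_j||^2 / (2 sigma_j^2)), stacked into a small multi-layer network and trained with RMSProp. The class and variable names, the plain random initialization, and the toy task are illustrative assumptions; the paper's specific initialization scheme is not reproduced here.

```python
import torch
import torch.nn as nn

class DistanceLayer(nn.Module):
    """A layer whose units respond to distance from a learned center,
    passed through a Gaussian, instead of computing a dot product."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # One center per output unit replaces the usual weight vector;
        # sigma (stored as log for positivity) controls the Gaussian width.
        self.centers = nn.Parameter(torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Squared Euclidean distance between each input and each center.
        dist2 = torch.cdist(x, self.centers).pow(2)
        sigma2 = torch.exp(self.log_sigma).pow(2)
        return torch.exp(-dist2 / (2.0 * sigma2))  # Gaussian activation

# A small multi-layer network of such layers with a linear readout.
model = nn.Sequential(DistanceLayer(2, 32), DistanceLayer(32, 32), nn.Linear(32, 1))
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy data: classify whether a 2-D point lies inside the unit circle.
x = torch.randn(256, 2)
y = (x.pow(2).sum(dim=1, keepdim=True) < 1).float()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

RMSProp's per-parameter scaling of the gradient is what the abstract credits with making such distance-based layers trainable where plain stochastic gradient descent struggles.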