In machine learning, backpropagation (backprop) is an algorithm widely used to train feedforward neural networks for supervised learning. Generalizations of backpropagation exist for other artificial neural networks (ANNs); this class of algorithms is often referred to generically as "backpropagation". Strictly speaking, the term refers only to the algorithm for computing the gradient, not to how the gradient is used; however, it is often used loosely to refer to the entire learning algorithm, including how the gradient is used, for example by stochastic gradient descent. Backpropagation generalizes the gradient computation in the delta rule, which is the single-layer version of backpropagation, and is in turn generalized by automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode"). Backpropagation works by computing the gradient of the loss function with respect to each weight via the chain rule, one layer at a time, iterating backward from the last layer to avoid redundant computation of intermediate terms in the chain rule.
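The layer-by-layer chain-rule computation described above can be sketched concretely. The following is a minimal illustrative example (not any particular library's implementation) for a two-layer network with a sigmoid hidden layer and squared-error loss; all names and shapes are assumptions chosen for the sketch.

```python
import numpy as np

# Minimal backpropagation sketch: 2-layer network, sigmoid hidden units,
# squared-error loss. Shapes and initialization are illustrative only.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # 4 samples, 3 input features
y = rng.normal(size=(4, 1))        # regression targets
W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 1)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass
h = sigmoid(x @ W1)                        # hidden activations
y_hat = h @ W2                             # linear output
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass: apply the chain rule from the last layer backward,
# reusing the intermediate gradient d_h instead of recomputing it.
d_yhat = (y_hat - y) / y.shape[0]          # dL/dy_hat
dW2 = h.T @ d_yhat                         # dL/dW2
d_h = d_yhat @ W2.T                        # gradient propagated to hidden layer
d_pre = d_h * h * (1 - h)                  # through the sigmoid derivative
dW1 = x.T @ d_pre                          # dL/dW1

# One stochastic-gradient-descent step using the gradients
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
```

Note how `d_h` is computed once and reused for `dW1`: this is the avoidance of redundant intermediate chain-rule terms that makes the backward sweep efficient.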


Graph neural networks (GNNs) are a class of deep-learning-based methods for processing information in the graph domain. By combining graph propagation operations with deep learning algorithms, they allow both the graph's structural information and vertex attribute information to participate in learning. They have shown strong performance and interpretability in applications such as vertex classification, graph classification, and link prediction, and have become a widely used method for graph analysis. However, mainstream deep learning frameworks (such as TensorFlow and PyTorch) provide neither efficient storage support for graph data nor message-passing support on graphs, which limits the application of GNN algorithms to large-scale graph data. Many recent works have explored the design and implementation of large-scale GNN systems, targeting the structural characteristics of graph data and the computational patterns of GNNs. This paper first gives a brief overview of the development of graph neural networks and summarizes the challenges faced in designing GNN systems; it then surveys current GNN systems, analyzing them in terms of system architecture, programming model, message-passing optimization, graph partitioning strategy, and communication optimization; finally, it experimentally evaluates several open-source GNN systems, verifying their effectiveness with respect to accuracy, performance, and scalability.

http://www.jos.org.cn/jos/ch/reader/view_abstract.aspx?file_no=6311
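To make the message-passing computation that these systems optimize concrete, here is a minimal sketch of one GCN-style propagation step with mean aggregation over neighbors. The graph, feature, and weight values (`A`, `H`, `W`) are illustrative assumptions, not tied to any system surveyed above.

```python
import numpy as np

# One round of message passing on a small graph: each node averages its
# neighborhood (including itself), then applies a linear layer and ReLU.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a 4-node graph
A_hat = A + np.eye(4)                        # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # inverse degree matrix
H = np.eye(4)                                # initial node features (one-hot)
W = np.full((4, 2), 0.5)                     # layer weight matrix

# Propagation: row-normalized neighborhood aggregation, then transform
H_next = np.maximum(0, D_inv @ A_hat @ H @ W)
```

Dense matrix products like `D_inv @ A_hat @ H` are exactly where the storage and message-passing support mentioned above matters: on large graphs, `A_hat` must be stored sparsely and the aggregation executed as a scatter/gather over edges rather than a dense matmul.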


Latest Content

Optimization in Deep Learning is mainly dominated by first-order methods, which are built around the central concept of backpropagation. Second-order optimization methods, which take into account the second-order derivatives, are far less used despite superior theoretical properties. This inadequacy of second-order methods stems from their exorbitant computational cost, poor performance, and the ineluctable non-convex nature of Deep Learning. Several attempts were made to resolve the inadequacy of second-order optimization without reaching a cost-effective solution, much less an exact solution. In this work, we show that this long-standing problem in Deep Learning could be solved in the stochastic case, given a suitable regularization of the neural network. Interestingly, we provide an expression of the stochastic Hessian and its exact eigenvalues. We provide a closed-form formula for the exact stochastic second-order Newton direction; we solve the non-convexity issue and adjust our exact solution to favor flat minima through regularization and spectral adjustment. We test our exact stochastic second-order method on popular datasets and reveal its adequacy for Deep Learning.
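The abstract's combination of a Newton direction with regularization and spectral adjustment for non-convexity can be illustrated generically. The sketch below is not the paper's exact construction; it shows one common way to handle indefinite Hessians, by taking absolute eigenvalues plus damping before inverting. All names and values are assumptions for illustration.

```python
import numpy as np

# Generic regularized Newton direction for a possibly non-convex objective:
# eigendecompose the (symmetric) Hessian, flip negative curvature via
# absolute values, add damping, then invert in the eigenbasis.
def newton_direction(grad, hess, damping=1e-3):
    eigvals, eigvecs = np.linalg.eigh(hess)
    adjusted = np.abs(eigvals) + damping           # spectral adjustment
    return -eigvecs @ ((eigvecs.T @ grad) / adjusted)

# Example on an indefinite quadratic f(w) = 0.5 w^T H w - b^T w
H = np.array([[2.0, 0.0], [0.0, -1.0]])           # one negative eigenvalue
b = np.array([1.0, 1.0])
w = np.zeros(2)
g = H @ w - b                                      # gradient at w
step = newton_direction(g, H)                      # descent step despite indefiniteness
```

With a plain Newton step, the negative eigenvalue would send the iterate toward a saddle or maximum; the spectral adjustment keeps the step a descent direction.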

