Existing analyses of optimization in deep learning are either continuous, focusing on (variants of) gradient flow, or discrete, directly treating (variants of) gradient descent. Gradient flow is amenable to theoretical analysis, but it is stylized and disregards computational efficiency. The extent to which it represents gradient descent is an open question in the theory of deep learning. The current paper studies this question. Viewing gradient descent as an approximate numerical solution to the initial value problem of gradient flow, we find that the degree of approximation depends on the curvature around the gradient flow trajectory. We then show that over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature, suggesting they are well approximated by gradient descent. This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to a global minimum almost surely under random initialization. Experiments suggest that over simple deep neural networks, gradient descent with a conventional step size is indeed close to gradient flow. We hypothesize that the theory of gradient flows will unravel mysteries behind deep learning.
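The sketch below is a minimal, illustrative illustration (not the paper's construction or proof) of the viewpoint described above: gradient descent with step size eta is treated as a coarse numerical solver for the gradient flow initial value problem, and its trajectory is compared against a much finer discretization of the same flow on a small deep linear network. The network sizes, data, initialization scale, and step sizes are all assumptions chosen for the example.

```python
# Minimal sketch: gradient descent vs. a finely discretized gradient flow
# trajectory on a depth-2 linear network. All shapes, data, and step sizes
# are illustrative assumptions, not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression problem: fit y = A x with a deep linear network W2 @ W1.
d, n = 4, 32
X = rng.normal(size=(d, n))
A = rng.normal(size=(d, d))
Y = A @ X

def loss_and_grads(W1, W2):
    """Squared loss of the deep linear network and its gradients."""
    R = W2 @ W1 @ X - Y                       # residuals
    loss = 0.5 * np.mean(np.sum(R**2, axis=0))
    G = R @ X.T / n                           # gradient w.r.t. the product W2 @ W1
    return loss, (W2.T @ G, G @ W1.T)         # gradients w.r.t. W1 and W2

def run(eta, steps, W1, W2):
    """Plain gradient descent; with a tiny eta this approximates gradient flow."""
    traj = []
    for _ in range(steps):
        _, (g1, g2) = loss_and_grads(W1, W2)
        W1, W2 = W1 - eta * g1, W2 - eta * g2
        traj.append(np.concatenate([W1.ravel(), W2.ravel()]))
    return np.array(traj)

W1_0 = rng.normal(size=(d, d)) * 0.1
W2_0 = rng.normal(size=(d, d)) * 0.1

eta, T = 0.1, 200                             # "conventional" step size (assumed)
gd = run(eta, T, W1_0.copy(), W2_0.copy())

# Reference gradient flow: integrate the same ODE with a 100x finer Euler step,
# then subsample so the k-th point corresponds to continuous time k * eta.
gf = run(eta / 100, T * 100, W1_0.copy(), W2_0.copy())[99::100]

gap = np.linalg.norm(gd - gf, axis=1)
print("max trajectory gap:", gap.max())       # small gap = GD tracks gradient flow
```

Under the paper's thesis, the reported gap should remain small along trajectories with favorable curvature; increasing eta or the curvature of the loss would widen it.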