Training deep neural networks (DNNs) can be difficult due to vanishing and exploding gradients during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stem from the discretization of continuous-time Hamiltonian systems and include several existing architectures based on ordinary differential equations. Our main result is that a broad set of H-DNNs ensures non-vanishing gradients by design, for arbitrary network depth. This is obtained by proving that, under a semi-implicit Euler discretization scheme, the backward sensitivity matrices involved in gradient computations are symplectic. We also provide an upper bound on the magnitude of the sensitivity matrices, and show that exploding gradients can either be controlled through regularization or avoided entirely for special architectures. Finally, we enable distributed implementations of the backward and forward propagation algorithms in H-DNNs by characterizing appropriate sparsity constraints on the weight matrices. The good performance of H-DNNs is demonstrated on benchmark classification problems, including image classification with the MNIST dataset.
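To illustrate the key mechanism, the following is a minimal sketch (not the paper's exact architecture) of one semi-implicit Euler step for a separable Hamiltonian H(p, q) = T(p) + V(q): the momenta are updated using the old positions, then the positions using the new momenta. The names `symplectic_euler_step`, `grad_V`, `grad_T`, and `h` are assumptions introduced for this example. For a harmonic oscillator, the Jacobian of the step map can be written in closed form, and its determinant is exactly 1, the volume-preserving property of symplectic maps that rules out vanishing gradients regardless of how many steps (layers) are composed.

```python
import numpy as np

def symplectic_euler_step(p, q, h, grad_V, grad_T):
    """One semi-implicit (symplectic) Euler step for H(p, q) = T(p) + V(q).

    Momenta are updated with the old positions, then positions with the
    freshly updated momenta; this ordering is what makes the map symplectic.
    (Illustrative sketch; names are assumptions, not the paper's notation.)
    """
    p_new = p - h * grad_V(q)
    q_new = q + h * grad_T(p_new)
    return p_new, q_new

# Toy Hamiltonian: H(p, q) = 0.5*p**2 + 0.5*q**2 (harmonic oscillator).
grad_V = lambda q: q   # dV/dq
grad_T = lambda p: p   # dT/dp
h = 0.1

# Jacobian of the step map (p, q) -> (p - h*q, h*p + (1 - h**2)*q),
# obtained by substituting p_new into the q update.
J = np.array([[1.0,  -h],
              [h,    1.0 - h**2]])

# det(J) = (1)(1 - h**2) - (-h)(h) = 1: the map preserves phase-space
# volume, so backward sensitivity matrices cannot shrink to zero.
print(np.linalg.det(J))
```

Composing many such steps multiplies determinant-1 Jacobians, so the product never degenerates; this is the discrete-time analogue of the non-vanishing-gradient guarantee stated above.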