Training deep neural networks (DNNs) can be difficult due to the occurrence of vanishing/exploding gradients during weight optimization. To avoid this problem, we propose a class of DNNs stemming from the time discretization of Hamiltonian systems. The time-invariant version of the corresponding Hamiltonian models enjoys marginal stability, a property that, as shown in previous works for specific DNN architectures, can mitigate the convergence of gradients to zero or their divergence. In the present paper, we formally study this feature by deriving and analysing the backward gradient dynamics in continuous time. The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures. The good performance of the novel DNNs is demonstrated on benchmark classification problems, including digit recognition using the MNIST dataset.
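To make the construction concrete, the following is a minimal sketch of how a layer can arise from time discretization of a Hamiltonian system; the separable Hamiltonian, the step size $h$, and the specific parametrization below are illustrative assumptions rather than the architectures derived in the paper. For states $(y, z)$ and a separable Hamiltonian $H(y, z) = T(z) + V(y)$, the equations of motion and one symplectic (semi-implicit) Euler step read
\[
\dot{y} = \nabla T(z), \qquad \dot{z} = -\nabla V(y)
\quad\Longrightarrow\quad
y_{j+1} = y_j + h\,\nabla T(z_j), \qquad z_{j+1} = z_j - h\,\nabla V(y_{j+1}),
\]
where the layer index $j$ plays the role of discrete time and the trainable weights enter through the parametrization of $T$ and $V$ (for instance, choosing $V(y) = \mathbf{1}^{\top} \log\cosh(K y + b)$ yields the layer update term $\nabla V(y) = K^{\top} \tanh(K y + b)$).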