Machine learning methods are widely used in the natural sciences to model and predict physical systems from observational data. Yet, they are often employed as poorly understood "black boxes," disregarding the existing mathematical structure and invariants of the problem. Recently, the introduction of Hamiltonian Neural Networks (HNNs) took a first step towards a unified "gray box" approach, using physical insight to improve performance for Hamiltonian systems. In this paper, we explore a significantly improved training method for HNNs, exploiting the symplectic structure of Hamiltonian systems with a different loss function. This frees the loss from an artificial lower bound. We mathematically guarantee the existence of an exact Hamiltonian function which the HNN can learn. This allows us to prove and numerically analyze the errors made by HNNs, which, in turn, renders them fully explainable. Finally, we present a novel post-training correction to obtain the true Hamiltonian only from discretized observation data, up to an arbitrary order.