The laws of physics have been written in the language of differential equations for centuries. Neural Ordinary Differential Equations (NODEs) are a new machine learning architecture which allows these differential equations to be learned from a dataset. They have been applied to classical dynamics simulations in the form of Lagrangian Neural Networks (LNNs) and Second Order Neural Differential Equations (SONODEs). However, these models either cannot represent the most general equations of motion or lack interpretability. In this paper, we propose Modular Neural ODEs, in which each force component is learned with a separate module. We show how physical priors can be easily incorporated into these models. Through a number of experiments, we demonstrate that they achieve better performance, are more interpretable, and gain flexibility from their modularity.
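The core idea, summing separately parameterised force modules inside a second-order ODE, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear maps stand in for the neural-network modules, the module count and dimensions are arbitrary, and `euler_step` is the simplest possible integrator.

```python
import numpy as np

def make_linear_module(rng, in_dim, out_dim):
    """One force module: a linear map standing in for a small neural net."""
    W = 0.1 * rng.standard_normal((out_dim, in_dim))
    return lambda x: W @ x

def total_force(modules, q, q_dot):
    """Sum the per-component forces (e.g. a 'spring' module and a 'drag'
    module), each seeing the full state (q, q_dot)."""
    state = np.concatenate([q, q_dot])
    return sum(m(state) for m in modules)

def euler_step(modules, q, q_dot, dt=0.01):
    """One explicit-Euler step of the second-order ODE q'' = F(q, q')."""
    f = total_force(modules, q, q_dot)
    return q + dt * q_dot, q_dot + dt * f

rng = np.random.default_rng(0)
# Two hypothetical force components acting on a 2-D coordinate.
modules = [make_linear_module(rng, 4, 2) for _ in range(2)]
q, q_dot = np.ones(2), np.zeros(2)
for _ in range(100):
    q, q_dot = euler_step(modules, q, q_dot)
print(q.shape, q_dot.shape)
```

Because each module is a separate object, one can inspect, freeze, or replace a single force component without retraining the others, which is the flexibility the modular design is meant to provide.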