We propose a new approach, based on neural ordinary differential equations (NODEs), to learning subgrid-scale models when simulating partial differential equations (PDEs) that are solved by the method of lines and thus represented as chaotic ordinary differential equations. Solving systems with fine temporal and spatial grid scales remains an ongoing computational challenge, and closure models are generally difficult to tune. Machine learning approaches have increased the accuracy and efficiency of computational fluid dynamics solvers. In our approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization. We propose a strategy that uses the NODE together with partial knowledge of the governing equations to learn the source dynamics at a continuous level. Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers. Numerical results for the two-scale Lorenz 96 ODE, the convection-diffusion PDE, and the viscous Burgers' PDE illustrate this approach.
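To make the idea concrete, the sketch below sets up the two-scale Lorenz 96 system and a NODE-style coarse model in which the known slow dynamics are augmented by a small neural-network source term standing in for the subgrid-scale parameterization. The parameter values, network size, and untrained placeholder weights are illustrative assumptions, not the paper's configuration; in practice the network weights would be fit by backpropagating through an ODE solver.

```python
import numpy as np

# Two-scale Lorenz 96 (hypothetical parameter choices): K slow variables X_k,
# each coupled to J fast variables Y_{j,k}.
K, J, F, h, b, c = 8, 4, 10.0, 1.0, 10.0, 10.0

def l96_two_scale_rhs(X, Y):
    """Full (fine-scale) right-hand side for the slow and fast variables."""
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F
          - (h * c / b) * Y.reshape(K, J).sum(axis=1))
    dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
          - c * Y + (h * c / b) * np.repeat(X, J))
    return dX, dY

def coarse_node_rhs(X, W1, b1, W2, b2):
    """NODE-style coarse model: known slow dynamics plus a learned source term.
    The two-layer MLP stands in for the trained network; its weights here are
    random placeholders."""
    known = np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
    hidden = np.tanh(X @ W1 + b1)      # learned subgrid-scale parameterization
    return known + hidden @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((K, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.standard_normal((16, K)), np.zeros(K)

X = rng.standard_normal(K)
dX = coarse_node_rhs(X, W1, b1, W2, b2)
print(dX.shape)  # (8,)
```

The coarse model evolves only the K slow variables, so time stepping it is far cheaper than resolving all K*J fast variables; the NODE framing means the closure term is learned at the continuous (right-hand-side) level rather than tied to one discretization.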