We propose a new approach, based on neural ordinary differential equations (NODEs), to learning the subgrid-scale model when simulating partial differential equations (PDEs) solved by the method of lines and their representation as chaotic ordinary differential equations (ODEs). Solving systems with fine temporal and spatial grid scales is an ongoing computational challenge, and closure models are generally difficult to tune. Machine learning approaches have increased the accuracy and efficiency of computational fluid dynamics solvers. In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization. We propose a strategy that uses the NODE and partial knowledge to learn the source dynamics at a continuous level. Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers. Numerical results with the two-scale Lorenz 96 ODE, the convection-diffusion PDE, and the viscous Burgers' PDE are used to illustrate this approach.
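To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a hybrid neural ODE in the spirit described above: the known coarse-scale tendency of the slow Lorenz 96 variables is augmented with a small MLP standing in for the unresolved subgrid coupling, and training backpropagates through a fixed-step RK4 integrator. The network width, step size, forcing value, loss, and the `rollout`/`train` helpers are illustrative assumptions, not the paper's actual configuration.

```python
# Hybrid NODE sketch: dX/dt = known coarse Lorenz 96 tendency + learned closure.
# Assumed setup for illustration only; hyperparameters are placeholders.
import torch
import torch.nn as nn

K, F = 8, 10.0  # number of slow variables and forcing (assumed values)

def coarse_rhs(x):
    """Known slow-variable Lorenz 96 tendency, without the subgrid coupling term."""
    xm1 = torch.roll(x, 1, -1)
    xm2 = torch.roll(x, 2, -1)
    xp1 = torch.roll(x, -1, -1)
    return -xm1 * (xm2 - xp1) - x + F

class HybridRHS(nn.Module):
    """Right-hand side = known physics + neural subgrid-scale closure."""
    def __init__(self, width=64):
        super().__init__()
        self.closure = nn.Sequential(
            nn.Linear(K, width), nn.Tanh(), nn.Linear(width, K))

    def forward(self, x):
        return coarse_rhs(x) + self.closure(x)

def rk4_step(f, x, dt):
    """One classical RK4 step; autograd tracks it for end-to-end training."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(f, x0, n_steps, dt):
    """Integrate the hybrid model forward, keeping the whole trajectory."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rk4_step(f, xs[-1], dt))
    return torch.stack(xs)  # shape: (n_steps + 1, batch, K)

def train(model, x_ref, dt=0.005, epochs=500, lr=1e-3):
    """Fit the closure to reference trajectories x_ref from the full
    two-scale model (shape (n_steps + 1, batch, K); not generated here)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred = rollout(model, x_ref[0], x_ref.shape[0] - 1, dt)
        loss = torch.mean((pred - x_ref) ** 2)
        loss.backward()  # gradients flow through every RK4 step
        opt.step()
    return model
```

Because the loss is taken over whole trajectories rather than single-step tendencies, the learned closure is trained at the continuous (solver) level, which is the property the abstract attributes to the NODE-based strategy.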