Computational advances have fundamentally transformed the landscape of numerical simulations, enabling unprecedented levels of complexity and precision in modeling physical phenomena. While these high-fidelity simulations offer invaluable insights for scientific discovery and problem solving, they impose substantial computational requirements. Consequently, low-fidelity models augmented with subgrid-scale parameterizations are employed to achieve computational feasibility. We introduce an end-to-end differentiable framework for solving the compressible Navier--Stokes equations. This integrated approach combines a differentiable discontinuous Galerkin (DG) solver with a neural network source term. By training the network parameters with neural ordinary differential equations (NODEs), our methodology ensures continuous interaction with the governing equations throughout the training process. We refer to this approach as NODE-DG. This hybrid approach combines the accuracy of numerical methods with the efficiency of machine learning, offering the following key advantages: (1) enhanced accuracy of low-order DG approximations by capturing subgrid-scale dynamics; (2) robustness to nonuniform and missing temporal data; (3) elimination of operator-splitting errors; and (4) a continuous-in-time operator enabling predictions with variable time step sizes, which accelerates projected high-order DG simulations. We demonstrate the performance of the proposed framework through two examples: a two-dimensional Kelvin--Helmholtz instability and a three-dimensional Taylor--Green vortex.
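To make the abstract's setup concrete, the sketch below illustrates the general NODE-DG idea under stated assumptions: the semi-discrete right-hand side is a differentiable spatial operator plus a learned source term, and the network parameters are optimized through the time integrator itself, so gradients see the governing equations at every step and no operator splitting is introduced. All names here (`dg_rhs`, `mlp_source`, `rk4_step`) are hypothetical placeholders, and the simple periodic advection operator stands in for the actual DG discretization; this is not the authors' implementation.

```python
# Minimal sketch of a hybrid "numerical operator + neural source term" model
# trained neural-ODE style, assuming a 1D periodic state vector for simplicity.
import jax
import jax.numpy as jnp

def dg_rhs(u):
    # Placeholder for the differentiable DG spatial operator; a centered
    # periodic advection stencil stands in for illustration only.
    return -(jnp.roll(u, -1) - jnp.roll(u, 1)) / 2.0

def mlp_source(params, u):
    # Small fully connected network acting pointwise as a learned source term.
    h = jnp.tanh(u[:, None] * params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).squeeze(-1)

def rhs(params, u):
    # Hybrid right-hand side: numerical operator + learned subgrid correction.
    return dg_rhs(u) + mlp_source(params, u)

def rk4_step(params, u, dt):
    # Classical RK4 step; because dt is an argument, the trained model stays
    # continuous in time and can be evaluated with variable step sizes.
    k1 = rhs(params, u)
    k2 = rhs(params, u + 0.5 * dt * k1)
    k3 = rhs(params, u + 0.5 * dt * k2)
    k4 = rhs(params, u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def loss(params, u0, u_ref, dt, n_steps):
    # Roll the hybrid solver forward and compare with a reference trajectory;
    # gradients flow through every solver step (no operator-splitting error).
    u = u0
    for _ in range(n_steps):
        u = rk4_step(params, u, dt)
    return jnp.mean((u - u_ref) ** 2)

key = jax.random.PRNGKey(0)
n, width = 64, 16
params = {
    "w1": 0.1 * jax.random.normal(key, (width,)),
    "b1": jnp.zeros(width),
    "w2": 0.1 * jax.random.normal(key, (width, 1)),
    "b2": jnp.zeros(1),
}
u0 = jnp.sin(2 * jnp.pi * jnp.arange(n) / n)
u_ref = jnp.roll(u0, 3)  # stand-in for high-fidelity reference data
grads = jax.grad(loss)(params, u0, u_ref, dt=0.05, n_steps=10)
```

Because differentiation goes through the integrator rather than through a separately learned correction step, the same trained source term can be evaluated at time step sizes not seen during training, which is the property the abstract highlights for accelerating projected high-order DG simulations.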