We develop a data-driven framework for discovering constitutive relations in models of fluid flow and scalar transport. Under the assumption that velocity and/or scalar fields are measured, our approach infers unknown closure terms in the governing equations as neural networks. Only the constitutive relations are discovered; the temporal-derivative, convective-transport, and pressure-gradient terms in the governing equations are prescribed. The formulation is rooted in a variational principle from non-equilibrium thermodynamics, in which the dynamics is defined by a free-energy functional and a dissipation functional, and the unknown constitutive terms arise as functional derivatives of these functionals with respect to the state variables. To enable flexible and structured model discovery, the free-energy and dissipation functionals are parameterized by neural networks, and their functional derivatives are obtained via automatic differentiation. This construction enforces thermodynamic consistency by design, guaranteeing monotonic decay of the total free energy and non-negative entropy production. The resulting method, termed GIMLET (Generalizable and Interpretable Model Learning through Embedded Thermodynamics), avoids reliance on a predefined library of candidate functions, unlike sparse-regression or symbolic-identification approaches. The learned models are generalizable in that functionals identified from one dataset can be transferred to distinct datasets governed by the same underlying equations. Moreover, the inferred free-energy and dissipation functionals provide direct physical interpretability of the learned dynamics. The framework is demonstrated on several benchmark systems, including the viscous Burgers equation, the Kuramoto--Sivashinsky equation, and the incompressible Navier--Stokes equations for both Newtonian and non-Newtonian fluids.
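To make the central construction concrete, the following is a minimal sketch, not the authors' implementation, of how a neural-network free-energy functional and its functional derivative via automatic differentiation could be set up in JAX on a periodic 1D grid. The network architecture, the finite-difference discretization, and all names (init_mlp, free_energy_density, total_free_energy, variational_derivative) are illustrative assumptions rather than elements of the paper.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 32, 32, 1)):
    # Random parameters for a small MLP mapping (u, u_x) to a scalar density.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def free_energy_density(params, u, ux):
    # Neural-network free-energy density f_theta(u, u_x) at one grid point.
    h = jnp.stack([u, ux])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def total_free_energy(params, u, dx):
    # F[u] approximated as sum_i f_theta(u_i, (Du)_i) * dx on a periodic grid.
    ux = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2.0 * dx)  # central difference
    densities = jax.vmap(free_energy_density, in_axes=(None, 0, 0))(params, u, ux)
    return jnp.sum(densities) * dx

def variational_derivative(params, u, dx):
    # Discrete functional derivative delta F / delta u at every grid point:
    # autodiff gradient of the discretized functional, rescaled by dx.
    return jax.grad(total_free_energy, argnums=1)(params, u, dx) / dx

key = jax.random.PRNGKey(0)
params = init_mlp(key)
x = jnp.linspace(0.0, 2.0 * jnp.pi, 128, endpoint=False)
u = jnp.sin(x)
dFdu = variational_derivative(params, u, x[1] - x[0])  # shape (128,)

In this kind of setup, the same pattern would be repeated for a dissipation functional, and the two functional derivatives would supply the unknown closure terms in the prescribed governing equations; the training loss would then compare the resulting right-hand side against the measured fields.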