Physics-informed deep learning often faces optimization challenges when solving partial differential equations (PDEs): the solver must explore large solution spaces, requires many iterations, and training can be unstable. These challenges stem in particular from the ill-conditioning of the optimization problem induced by the differential terms in the loss function. To address them, we propose learning a solver, i.e., solving PDEs with a physics-informed iterative algorithm trained on data. Our method learns to condition a gradient descent algorithm so that it automatically adapts to each PDE instance, significantly accelerating and stabilizing optimization and enabling faster convergence of physics-aware models. Moreover, while traditional physics-informed methods solve a single PDE instance, our approach extends to parametric PDEs: by combining the physical loss gradient with the PDE parameters, the method solves over a distribution of PDE parameters, including coefficients, initial conditions, and boundary conditions. We demonstrate the effectiveness of our approach through empirical experiments on multiple datasets, comparing both training and test-time optimization performance. The code is available at https://github.com/2ailesB/neural-parametric-solver.
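The ill-conditioning the abstract refers to can be made concrete on a toy problem. The sketch below (an illustration, not the paper's learned architecture) minimizes a physics residual loss for a finite-difference 1D Poisson problem: plain gradient descent is throttled by the huge spectral range of the differential operator, while multiplying the gradient by an ideal conditioner, here the Gauss-Newton inverse for this quadratic loss, converges immediately. The paper's contribution is to *learn* such a conditioning map from data so it adapts per PDE instance; the explicit matrix inverse here merely stands in for that learned map.

```python
import numpy as np

# Toy physics residual loss L(u) = 0.5 * ||A u - f||^2 for -u'' = f on (0,1)
# with u(0) = u(1) = 0, discretized by centered finite differences.
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)        # chosen so the solution is sin(pi x)
u_star = np.sin(np.pi * x)

# Discrete Laplacian: the differential term that ill-conditions the loss.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2

def grad(u):
    """Gradient of the physics residual loss 0.5 * ||A u - f||^2."""
    return A.T @ (A @ u - f)

# Plain gradient descent: the stable step size is capped by the largest
# eigenvalue of A^T A (which scales like 1/h^4), so the smooth, low-frequency
# error modes barely move even after many iterations.
u = np.zeros(n)
lr = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(500):
    u = u - lr * grad(u)
err_gd = np.max(np.abs(u - u_star))

# Conditioned descent: applying (A^T A)^{-1} to the gradient (the ideal
# conditioner for this quadratic loss) reaches the discrete solution in a
# single step, leaving only the O(h^2) discretization error.
u = np.zeros(n)
u = u - np.linalg.solve(A.T @ A, grad(u))
err_cond = np.max(np.abs(u - u_star))
```

Running this, `err_cond` drops to the discretization error while `err_gd` stays near the initial error, which is the gap a learned, instance-adaptive conditioner aims to close without hand-deriving the operator's inverse.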