Physics-informed neural networks (PINNs) are an increasingly powerful way to solve partial differential equations, generate digital twins, and create neural surrogates of physical models. In this manuscript we detail the inner workings of NeuralPDE.jl and show how a formulation structured around numerical quadrature gives rise to new loss functions that allow for adaptivity towards bounded error tolerances. We describe the various ways one can use the tool, detailing mathematical techniques such as extended loss functions for parameter estimation and operator discovery, to help potential users adopt these PINN-based techniques into their workflow. We showcase how NeuralPDE uses a purely symbolic formulation so that all of the underlying training code is generated from an abstract problem specification, and show how to make use of GPUs and solve systems of PDEs. Afterwards we give a detailed performance analysis which highlights the trade-offs between training techniques on a large set of PDEs. We end by focusing on a complex multiphysics example, the Doyle-Fuller-Newman (DFN) model, and show how this PDE system can be formulated and solved with NeuralPDE. Altogether, this manuscript is meant to be a detailed and approachable technical report that helps potential users quickly get a sense of the real-world performance trade-offs and use cases of PINN techniques.