Traditional, discretization-based numerical solvers of partial differential equations (PDEs) are fundamentally agnostic to domains, boundary conditions and coefficients. In contrast, machine-learnt solvers have limited generalizability across these elements of boundary value problems. This is especially true of surrogate models, which are typically trained on direct numerical simulations of a PDE applied to one specific boundary value problem. In a departure from this direct approach, label-free machine learning of solvers is centered on a loss function that incorporates the PDE and boundary conditions in residual form. However, such solvers generalize poorly across boundary conditions and remain strongly domain-dependent. Here, we present a framework that generalizes across domains, boundary conditions and coefficients simultaneously, while learning the PDE in weak form. Our work explores the ability of simple, convolutional neural network (CNN)-based encoder-decoder architectures to learn to solve a PDE in greater generality than its restriction to a particular boundary value problem. In this first communication, we consider the elliptic PDEs of Fickian diffusion and of linear and nonlinear elasticity. Importantly, the learning happens independently of any labelled field data from either experiments or direct numerical solutions. Extensive results for these problem classes demonstrate the framework's ability to learn PDE solvers that generalize across hundreds of thousands of domains, boundary conditions and coefficients, including extrapolation beyond the learning regime. Once trained, the machine-learnt solvers are orders of magnitude faster than discretization-based solvers. We place our work in the context of recent continuous operator learning frameworks, and note extensions to transfer learning, active learning and reinforcement learning.
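To make the label-free idea concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: it uses a strong-form finite-difference residual of the steady diffusion equation rather than the weak form described above, and a penalty term for Dirichlet boundary conditions. A small CNN encoder-decoder maps a diffusivity field and a boundary encoding to a solution field, and is trained with no labelled solution data. All names, the architecture and the synthetic batch of boundary value problems are illustrative.

```python
# Hypothetical sketch of label-free training of a CNN encoder-decoder PDE solver.
# Loss = mean-squared PDE residual (strong form, finite differences) + BC penalty.
import torch
import torch.nn as nn

class SmallEncoderDecoder(nn.Module):
    """Illustrative encoder-decoder: input channels are (D, bc_mask, bc_value)."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def diffusion_residual(u, D, h=1.0):
    """Interior residual of div(D grad u) = 0 on a uniform grid (five-point flux form)."""
    De = 0.5 * (D[..., 1:-1, 2:] + D[..., 1:-1, 1:-1])   # east-face diffusivities
    Dw = 0.5 * (D[..., 1:-1, :-2] + D[..., 1:-1, 1:-1])  # west faces
    Dn = 0.5 * (D[..., 2:, 1:-1] + D[..., 1:-1, 1:-1])   # north faces
    Ds = 0.5 * (D[..., :-2, 1:-1] + D[..., 1:-1, 1:-1])  # south faces
    uc = u[..., 1:-1, 1:-1]
    return (De * (u[..., 1:-1, 2:] - uc) - Dw * (uc - u[..., 1:-1, :-2])
          + Dn * (u[..., 2:, 1:-1] - uc) - Ds * (uc - u[..., :-2, 1:-1])) / h**2

def label_free_loss(model, D, bc_mask, bc_value, h=1.0, bc_weight=100.0):
    """No labelled fields: only the PDE residual and a Dirichlet penalty are used."""
    x = torch.cat([D, bc_mask, bc_value], dim=1)   # encode coefficients and BCs as inputs
    u = model(x)
    pde_loss = diffusion_residual(u, D, h).pow(2).mean()
    bc_loss = (bc_mask * (u - bc_value)).pow(2).sum() / bc_mask.sum().clamp(min=1.0)
    return pde_loss + bc_weight * bc_loss

if __name__ == "__main__":
    # Synthetic batch of boundary value problems with varying coefficients.
    torch.manual_seed(0)
    B, H, W = 8, 32, 32
    model = SmallEncoderDecoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    D = 0.5 + torch.rand(B, 1, H, W)          # random positive diffusivity fields
    bc_mask = torch.zeros(B, 1, H, W)
    bc_mask[..., 0, :] = 1.0                  # Dirichlet data on the top edge
    bc_mask[..., -1, :] = 1.0                 # and on the bottom edge
    bc_value = torch.zeros(B, 1, H, W)
    bc_value[..., 0, :] = 1.0                 # u = 1 on top, u = 0 on bottom
    for step in range(200):
        opt.zero_grad()
        loss = label_free_loss(model, D, bc_mask, bc_value)
        loss.backward()
        opt.step()
```

Because the coefficient field and the boundary encoding are inputs to the network rather than fixed in the loss, a single trained model of this kind can, in principle, be evaluated on unseen boundary value problems, which is the generalization the abstract refers to.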