We propose a very general framework for deriving rigorous bounds on the approximation error for physics-informed neural networks (PINNs) and operator learning architectures such as DeepONets and FNOs, as well as for physics-informed operator learning. These bounds guarantee that PINNs and (physics-informed) DeepONets or FNOs will efficiently approximate the underlying solution or solution operator of generic partial differential equations (PDEs). Our framework utilizes existing neural network approximation results to obtain bounds on more involved learning architectures for PDEs. We illustrate the general framework by deriving the first rigorous bounds on the approximation error of physics-informed operator learning and by showing that PINNs (and physics-informed DeepONets and FNOs) mitigate the curse of dimensionality in approximating nonlinear parabolic PDEs.