We propose a new data-driven approach for learning the fundamental solutions (Green's functions) of various linear partial differential equations (PDEs) given sample pairs of input-output functions. Building on the theory of functional linear regression (FLR), we estimate the best-fit Green's function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS), which allows us to regularize their smoothness and impose various structural constraints. We derive a general representer theorem for operator RKHSs to approximate the original infinite-dimensional regression problem by a finite-dimensional one, reducing the search space to a parametric class of Green's functions. To study the prediction error of our Green's function estimator, we extend prior results on FLR with scalar outputs to the case with functional outputs. Finally, we demonstrate our method on several linear PDEs, including the Poisson, Helmholtz, Schr\"{o}dinger, Fokker-Planck, and heat equations. We highlight its robustness to noise as well as its ability to generalize to new data with varying degrees of smoothness and mesh discretization without any additional training.