We propose a new data-driven approach for learning the fundamental solutions (i.e., Green's functions) of various linear partial differential equations (PDEs) given sample pairs of input-output functions. Building on the theory of functional linear regression (FLR), we estimate the best-fit Green's function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS), which allows us to regularize their smoothness and impose various structural constraints. We derive a general representer theorem for operator RKHSs, which lets us approximate the original infinite-dimensional regression problem by a finite-dimensional one, reducing the search space to a parametric class of Green's functions. In order to study the prediction error of our Green's function estimator, we extend prior results on FLR with scalar outputs to the case with functional outputs. Furthermore, our rates of convergence hold even in the misspecified setting when the data are generated by a nonlinear PDE under certain constraints. Finally, we demonstrate applications of our method to several linear PDEs, including the Poisson, Helmholtz, Schr\"odinger, Fokker-Planck, and heat equations, and highlight its ability to extrapolate to more finely sampled meshes without any additional training.
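To make the regression setup concrete, a minimal sketch of the functional linear model described above is given below; the symbols $f_i$ (input functions), $u_i$ (output functions), $G$ (Green's function), $\beta$ (bias term), and the domain $\Omega$ are our own notational assumptions rather than definitions taken from the text.
\[
u_i(x) \;\approx\; \int_{\Omega} G(x, y)\, f_i(y)\, \mathrm{d}y \;+\; \beta(x), \qquad i = 1, \dots, n,
\]
where $G$ and $\beta$ would be estimated from the sample pairs $(f_i, u_i)$ by penalized least squares in the chosen RKHS.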