We propose a new data-driven approach for learning the fundamental solutions (Green's functions) of various linear partial differential equations (PDEs) given sample pairs of input-output functions. Building on the theory of functional linear regression (FLR), we estimate the best-fit Green's function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS), which allows us to regularize their smoothness and impose various structural constraints. We derive a general representer theorem for operator RKHSs that approximates the original infinite-dimensional regression problem by a finite-dimensional one, reducing the search space to a parametric class of Green's functions. To study the prediction error of our Green's function estimator, we extend prior results on FLR with scalar outputs to the case with functional outputs. Finally, we demonstrate our method on several linear PDEs, including the Poisson, Helmholtz, Schr\"{o}dinger, Fokker-Planck, and heat equations, and highlight its robustness to noise as well as its ability to generalize to new data with varying degrees of smoothness and mesh discretization without any additional training.
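The pipeline the abstract describes, fitting a Green's function from sample input-output pairs by solving a regularized regression problem, can be sketched in a fully discretized setting. The following is an illustrative toy only, not the paper's RKHS estimator: it recovers the known Green's function of the 1-D Poisson problem $-u'' = f$ with zero boundary conditions from synthetic pairs via plain ridge regression on a grid. All variable names, the white-noise forcings, and the penalty `lam` are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                       # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]              # quadrature weight for the Riemann sum

# Ground-truth Green's function of -u'' = f, u(0) = u(1) = 0:
# G(x, y) = min(x, y) * (1 - max(x, y)).
X, Y = np.meshgrid(x, x, indexing="ij")
G_true = np.minimum(X, Y) * (1.0 - np.maximum(X, Y))

# Synthetic training pairs: white-noise forcings f_j (for illustration),
# outputs u_j(x_i) = sum_k G(x_i, y_k) f_j(y_k) h via the quadrature rule.
m = 200
F = rng.standard_normal((m, n))
U = F @ (G_true.T * h)

# Ridge-regularized least squares for the discretized Green's function:
# minimize ||F_h G^T - U||_F^2 + lam ||G||_F^2, with F_h = F * h.
lam = 1e-8
Fh = F * h
G_hat = np.linalg.solve(Fh.T @ Fh + lam * np.eye(n), Fh.T @ U).T

rel_err = np.linalg.norm(G_hat - G_true) / np.linalg.norm(G_true)
```

Since the synthetic outputs are noise-free and the design is well-conditioned, `rel_err` is tiny here; the RKHS machinery in the paper replaces this pointwise grid parametrization with a smoothness-regularized function class that transfers across discretizations.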