Given input-output pairs of an elliptic partial differential equation (PDE) in three dimensions, we derive the first theoretically rigorous scheme for learning the associated Green's function $G$. By exploiting the hierarchical low-rank structure of $G$, we show that one can construct an approximant to $G$ that converges almost surely and achieves a relative error of $\mathcal{O}(\Gamma_\epsilon^{-1/2}\log^3(1/\epsilon)\epsilon)$ using at most $\mathcal{O}(\epsilon^{-6}\log^4(1/\epsilon))$ input-output training pairs with high probability, for any $0<\epsilon<1$. The quantity $0<\Gamma_\epsilon\leq 1$ characterizes the quality of the training dataset. Along the way, we extend the randomized singular value decomposition algorithm for learning matrices to Hilbert--Schmidt operators and characterize the quality of covariance kernels for PDE learning.
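For context, the matrix randomized SVD that the abstract generalizes to Hilbert--Schmidt operators can be sketched as follows. This is a minimal NumPy sketch of the classical matrix algorithm (randomized range finding followed by an SVD of the projected matrix), not the paper's operator extension; the function name and parameters are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, rng=None):
    # Sketch of the classical randomized SVD for matrices:
    # probe the range of A with random Gaussian test vectors,
    # then compute an exact SVD of the small projected matrix.
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega                     # sample the range of A
    for _ in range(n_iter):           # power iterations sharpen the range estimate
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sampled range
    B = Q.T @ A                       # small (rank + oversample) x n matrix
    U_hat, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat                     # lift left singular vectors back to R^m
    return U[:, :rank], S[:rank], Vt[:rank]
```

In the PDE-learning setting, the matrix-vector products `A @ Omega` are replaced by applications of the solution operator to random forcing terms drawn from a Gaussian process, which is where the quality of the covariance kernel enters.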