Given input-output pairs of an elliptic partial differential equation (PDE) in three dimensions, we derive the first theoretically rigorous scheme for learning the associated Green's function $G$. By exploiting the hierarchical low-rank structure of $G$, we show that one can construct an approximant to $G$ that converges almost surely and achieves an expected relative error of $\epsilon$ using at most $\mathcal{O}(\epsilon^{-6}\log^4(1/\epsilon)/\Gamma_\epsilon)$ input-output training pairs, for any $0<\epsilon<1$. The quantity $0<\Gamma_\epsilon\leq 1$ characterizes the quality of the training dataset. Along the way, we extend the randomized singular value decomposition algorithm for learning matrices to Hilbert--Schmidt operators and characterize the quality of covariance kernels for PDE learning.
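The matrix version of the randomized SVD that the abstract extends to Hilbert--Schmidt operators can be sketched as follows. This is a minimal illustration of the standard randomized range-finder approach (in the Halko--Martinsson--Tropp style), not the paper's operator-valued scheme; the function name and parameters are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversamples=10, seed=0):
    """Approximate a rank-`rank` SVD of A by probing it with random
    Gaussian vectors, orthogonalizing the responses, and taking the
    SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Random Gaussian test matrix: its columns play the role of the
    # random input functions in the operator (PDE-learning) setting.
    Omega = rng.standard_normal((n, rank + n_oversamples))
    Y = A @ Omega                        # responses ("output" data)
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the range of Y
    B = Q.T @ A                          # small (rank + p) x n matrix
    U_small, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], S[:rank], Vt[:rank, :]
```

For an exactly rank-$k$ matrix, probing with $k$ plus a few oversampled random vectors captures the column space with overwhelming probability, so the reconstruction $U \,\mathrm{diag}(S)\, V^\top$ recovers $A$ to near machine precision; the paper's analysis quantifies the analogous statement when the "matrix" is the solution operator of the PDE and only input-output pairs are available.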