We study the Bayesian inverse problem of learning a linear operator on a Hilbert space from its noisy pointwise evaluations on random input data. Our framework assumes that the target operator is self-adjoint and diagonal in a basis shared with the Gaussian prior and noise covariance operators arising from the imposed statistical model, and it is able to handle target operators that are compact, bounded, or even unbounded. We establish posterior contraction rates with respect to a family of Bochner norms as the number of data tends to infinity and derive related lower bounds on the estimation error. In the large data limit, we also provide asymptotic convergence rates for suitably defined excess risk and generalization gap functionals associated with the posterior mean point estimator. In doing so, we connect the posterior consistency results to nonparametric learning theory. Furthermore, these convergence rates highlight and quantify the difficulty of learning unbounded linear operators compared with learning bounded or compact ones. Numerical experiments confirm the theory and demonstrate that similar conclusions may be expected in more general problem settings.
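To make the diagonal setting concrete, the following minimal sketch (our illustration, not the authors' code) simulates data y_n = L x_n + eta_n with the target operator, Gaussian prior, and noise covariance all diagonal in a shared basis, so the posterior factorizes into scalar conjugate Gaussian posteriors over the eigenvalues. All parameter choices here (the truncation level J, sample size N, and the prior, noise, and input covariance spectra) are hypothetical and chosen only for illustration.

```python
# Minimal sketch, assuming a diagonal conjugate Gaussian model; all
# parameter values below are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

J, N = 50, 200                     # truncation level and number of data pairs
j = np.arange(1, J + 1)

l_true = j ** 1.0                  # eigenvalues growing in j (unbounded-type target)
sigma2_prior = j ** 2.0            # prior variances: l_j ~ N(0, sigma2_prior[j-1])
gamma2_noise = 1e-2 * np.ones(J)   # noise variances per mode

# Random input data: coefficients x_{n,j} ~ N(0, c_j) in the shared basis,
# with a trace-class input covariance spectrum c_j.
c = j ** -2.0
X = rng.normal(size=(N, J)) * np.sqrt(c)
Y = X * l_true + rng.normal(size=(N, J)) * np.sqrt(gamma2_noise)

# Because every operator is diagonal in the shared basis, the posterior
# factorizes over modes into scalar conjugate Gaussian posteriors:
#   precision_j = 1/sigma2_prior_j + sum_n x_{n,j}^2 / gamma2_j
#   mean_j      = (sum_n x_{n,j} y_{n,j} / gamma2_j) / precision_j
precision = 1.0 / sigma2_prior + (X ** 2).sum(axis=0) / gamma2_noise
post_mean = (X * Y).sum(axis=0) / gamma2_noise / precision
post_var = 1.0 / precision

rel_err = np.abs(post_mean - l_true) / np.abs(l_true)
print("max relative error of posterior mean:", rel_err.max())
print("max posterior standard deviation:", np.sqrt(post_var).max())
```

Rerunning this sketch with larger N shows the per-mode posterior means approaching the true eigenvalues and the posterior variances shrinking, which is the qualitative behavior the contraction-rate results quantify.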