This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces. The training data comprises pairs of random input vectors in a Hilbert space and their noisy images under an unknown self-adjoint linear operator. Assuming that the operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues given the data. Adopting a Bayesian approach, the theoretical analysis establishes posterior contraction rates in the infinite data limit with Gaussian priors that are not directly linked to the forward map of the inverse problem. The main results also include learning-theoretic generalization error guarantees for a wide range of distribution shifts. These convergence rates quantify the effects of data smoothness and true eigenvalue decay or growth, for compact or unbounded operators, respectively, on sample complexity. Numerical evidence supports the theory in diagonal and non-diagonal settings.
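To make the setting concrete, the following is a minimal sketch (not the paper's actual estimator) of the diagonal case: data pairs (x_n, y_n) with y_n = A x_n + noise, where A is self-adjoint and diagonal in a known basis, so each eigenvalue can be estimated by an independent conjugate Gaussian regression on the corresponding coefficient sequence. All names, the eigenvalue decay, and the prior variances are illustrative assumptions, with the problem truncated to finitely many modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: J retained modes, N data pairs, noise level gamma.
J, N, gamma = 50, 2000, 0.01

# Assumed ground truth: polynomially decaying eigenvalues of a compact operator.
lam_true = 1.0 / (1.0 + np.arange(J)) ** 2

# Random input coefficients in the diagonalizing basis and their noisy images:
# y_{n,j} = lam_j * x_{n,j} + gamma * noise.
X = rng.standard_normal((N, J))
Y = lam_true * X + gamma * rng.standard_normal((N, J))

# Independent Gaussian priors lam_j ~ N(0, sigma_j^2) on each eigenvalue
# (here flat unit variances, chosen for illustration, not tied to A).
prior_var = np.ones(J)

# Conjugate Gaussian posterior for each eigenvalue separately:
# precision and mean of the 1D Bayesian linear regression per mode.
post_prec = (X**2).sum(axis=0) / gamma**2 + 1.0 / prior_var
post_mean = (X * Y).sum(axis=0) / gamma**2 / post_prec
post_var = 1.0 / post_prec

# post_mean approximates lam_true; post_var shrinks as N grows,
# consistent with posterior contraction in the infinite data limit.
```

The per-mode decoupling is exactly what the diagonalizability assumption buys: the infinite-dimensional operator learning problem reduces to countably many scalar inverse problems, and the contraction rate is governed by how fast the true eigenvalues decay (compact case) or grow (unbounded case) relative to the smoothness of the inputs.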