We consider the problem of learning a linear operator $\theta$ between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for $\theta$ with the undesirable feature that its forward operator is generally non-compact (even if $\theta$ is assumed to be compact or of $p$-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under H\"older-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.
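For orientation, the least squares problem referred to above can be sketched as follows; this is an illustrative formalisation of the standard setup only, and the notation $\widehat{\theta}_n$, $\varepsilon$ and the Hilbert--Schmidt class $\mathcal{S}_2(H_1, H_2)$ are assumptions of this sketch rather than fixed by the abstract. Given i.i.d. pairs $(X_i, Y_i)$ taking values in Hilbert spaces $H_1 \times H_2$, one may consider
% Illustrative sketch (assumed notation): linear model with Hilbertian noise
% and the associated empirical risk minimiser over Hilbert--Schmidt operators.
\begin{equation*}
  Y = \theta X + \varepsilon,
  \qquad
  \widehat{\theta}_n
  \in \operatorname*{arg\,min}_{A \in \mathcal{S}_2(H_1, H_2)}
  \frac{1}{n} \sum_{i=1}^{n} \lVert Y_i - A X_i \rVert_{H_2}^{2}.
\end{equation*}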