Learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. In this paper, we study the statistical limits of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces. We establish the information-theoretic lower bound in terms of the Sobolev Hilbert-Schmidt norm and show that a regularization scheme that learns the spectral components below the bias contour and ignores those above the variance contour achieves the optimal learning rate. At the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. Based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal for learning linear operators between infinite-dimensional function spaces.
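To make the spectral-cutoff idea concrete, the following is a minimal NumPy sketch of estimating a Hilbert-Schmidt operator from noisy input-output function pairs: it keeps only the leading spectral components of the empirical input covariance and ignores the rest. This is an illustration under assumed synthetic settings, not the paper's multilevel algorithm; the diagonal ground-truth operator, the decay exponents, and the function name `spectral_cutoff_estimator` are all hypothetical choices made for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (illustrative): functions are represented by their first D
# coefficients in a shared orthonormal basis; a Sobolev-type ball corresponds
# to coefficient decay j^(-s).
D, n, noise = 200, 500, 0.05
j = np.arange(1, D + 1)

# Ground-truth Hilbert-Schmidt operator: diagonal with Sobolev-type decay
# (any fixed HS operator would do; diagonal keeps the example readable).
A_true = np.diag(j ** -1.5)

# Random input functions with decaying coefficients; noisy outputs y = A x + eps.
X = rng.standard_normal((D, n)) * (j ** -1.0)[:, None]
Y = A_true @ X + noise * rng.standard_normal((D, n))

def spectral_cutoff_estimator(X, Y, k):
    """Least-squares estimate of the operator restricted to the top-k
    spectral components of the empirical input covariance; components
    above the cutoff are ignored rather than regularized."""
    n = X.shape[1]
    cov = X @ X.T / n
    evals, evecs = np.linalg.eigh(cov)
    idx = np.argsort(evals)[::-1][:k]        # k leading eigenpairs
    U, lam = evecs[:, idx], evals[idx]
    # Cross-covariance times the (truncated) pseudo-inverse of the covariance.
    A_hat = (Y @ X.T / n) @ U / lam
    return A_hat @ U.T                       # lift back to the full basis

for k in (5, 20, 80):
    err = np.linalg.norm(spectral_cutoff_estimator(X, Y, k) - A_true)
    print(f"cutoff k={k:3d}  HS error={err:.4f}")
```

The cutoff k plays the role of the contours in the abstract: too small a k discards signal (bias), too large a k fits components whose empirical eigenvalues are dominated by noise (variance), and the components in between can be scheduled freely, which is the freedom the multilevel algorithm exploits.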