A fundamental problem in manifold learning is to approximate a functional relationship in data chosen randomly from a probability distribution supported on a low-dimensional submanifold of a high-dimensional ambient Euclidean space. The manifold is essentially defined by the data set itself and is typically designed so that the data is dense on the manifold in some sense. The notion of a data space is an abstraction of a manifold that encapsulates the essential properties allowing for function approximation. The problem of transfer learning (meta-learning) is to use the learning of a function on one data set to learn a similar function on a new data set. In terms of function approximation, this means lifting a function defined on one data space (the base data space) to another (the target data space). This viewpoint enables us to connect certain inverse problems in applied mathematics (such as the inverse Radon transform) with transfer learning. In this paper, we examine the question of such lifting when the data is known only on a part of the base data space. We are interested in determining subsets of the target data space on which the lifting can be defined, and in how the local smoothness of the function and that of its lifting are related.
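Schematically, and in notation introduced here only for illustration, write $\mathbb{X}$ for the base data space and $\mathbb{Y}$ for the target data space, and suppose the two are related by a transfer map $\phi:\mathbb{Y}\to\mathbb{X}$. If the function $f$ is known only on a subset $A\subseteq\mathbb{X}$, the lifting sought is
\[
F(y)=f(\phi(y)),\qquad y\in B=\phi^{-1}(A)\subseteq\mathbb{Y},
\]
and the questions above become: on how large a subset $B$ can the lifting be defined, and how does the local smoothness of $F$ at $y\in B$ relate to that of $f$ at $\phi(y)$?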