Supervised transfer learning (TL) has received considerable attention because of its potential to boost the predictive power of machine learning in cases with limited data. In a conventional scenario, cross-domain differences are modeled and estimated using a given set of source models and samples from a target domain. For example, if there is a functional relationship between source and target domains, only domain-specific factors are additionally learned using target samples to shift the source models to the target. However, the general methodology for modeling and estimating such cross-domain shifts has been less studied. This study presents a TL framework that simultaneously and separately estimates domain shifts and domain-specific factors using given target samples. Assuming consistency and invertibility of the domain transformation functions, we derive an optimal family of functions to represent the cross-domain shift. The newly derived class of transformation functions takes the same form as invertible neural networks using affine coupling layers, which are widely used in generative deep learning. We show that the proposed method encompasses a wide range of existing methods, including the most common TL procedure based on feature extraction using neural networks. We also clarify the theoretical properties of the proposed method, such as the convergence rate of the generalization error, and demonstrate the practical benefits of separately modeling and estimating domain-specific factors through several case studies.
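To make the transformation family named above concrete, here is a minimal NumPy sketch of an affine coupling layer, the invertible building block the abstract refers to. This is an illustrative sketch, not the paper's implementation: `scale_net` and `shift_net` are hypothetical placeholders for arbitrary (learned) functions of the untransformed half of the input.

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    """Forward pass: keep x1 fixed, affinely transform x2 conditioned on x1."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s, t = scale_net(x1), shift_net(x1)   # scale and shift depend only on x1
    y2 = x2 * np.exp(s) + t               # elementwise affine map; exp keeps scale positive
    return np.concatenate([x1, y2], axis=-1)

def affine_coupling_inverse(y, scale_net, shift_net):
    """Exact inverse: y1 equals x1, so s and t can be recomputed and undone."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

# Example usage with toy placeholder networks (hypothetical choices):
scale_net = lambda h: 0.1 * h
shift_net = lambda h: h + 1.0
x = np.array([[0.5, -1.0, 2.0, 0.3]])
y = affine_coupling_forward(x, scale_net, shift_net)
x_rec = affine_coupling_inverse(y, scale_net, shift_net)
```

Because the first half of the input passes through unchanged, the scale and shift can be recomputed exactly during inversion, which is what makes the layer invertible by construction regardless of how complex `scale_net` and `shift_net` are.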