This work studies transfer learning under the functional linear regression model, with the aim of improving the estimation and prediction of a target model by leveraging information from related source models. We measure the relatedness between the target and source models using the Reproducing Kernel Hilbert Space (RKHS) norm, allowing the type of information being transferred to be interpreted through the structural properties of the spaces. Two transfer learning algorithms are proposed: one transfers information from source tasks when we know which sources to use, while the other aggregates multiple transfer learning results from the first algorithm to achieve robust transfer learning without prior information about the sources. Furthermore, we establish the optimal convergence rates for the prediction risk in the target model, making the statistical gain from transfer learning mathematically provable. The theoretical analysis of the prediction risk also provides insight into which factors affect the transfer learning effect, i.e., what makes source tasks useful to the target task. We demonstrate the effectiveness of the proposed transfer learning algorithms on extensive synthetic data as well as a real financial data application.