We study the transfer learning process between two linear regression problems. An important and timely special case is when the regressors are overparameterized and perfectly interpolate their training data. We examine a parameter transfer mechanism whereby a subset of the parameters of the target task solution are constrained to the values learned for a related source task. We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to target task, and the correlation between the two tasks. Our non-asymptotic analysis shows that the generalization error of the target task follows a two-dimensional double descent trend (with respect to the number of free parameters in each of the tasks) that is controlled by the transfer learning factors. Our analysis points to specific cases where the transfer of parameters is beneficial as a substitute for extra overparameterization (i.e., additional free parameters in the target task). Specifically, we show that the usefulness of a transfer learning setting is fragile and depends on a delicate interplay among the set of transferred parameters, the relation between the tasks, and the true solution. We also demonstrate that overparameterized transfer learning is not necessarily more beneficial when the source task is closer or identical to the target task.
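The parameter-transfer mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's exact setup: the dimensions, the additive task-correlation model, and the use of the pseudoinverse for the minimum-norm interpolating solution are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_s, n_t, k = 30, 20, 15, 10  # parameters; source/target examples; transferred params

# Hypothetical pair of related linear regression tasks (illustrative only):
# the target coefficients are a small perturbation of the source coefficients.
beta_src = rng.normal(size=p)
beta_tgt = beta_src + 0.1 * rng.normal(size=p)
X_s, X_t = rng.normal(size=(n_s, p)), rng.normal(size=(n_t, p))
y_s = X_s @ beta_src
y_t = X_t @ beta_tgt

# Source task: minimum-norm interpolating solution (overparameterized, p > n_s).
w_src = np.linalg.pinv(X_s) @ y_s

# Parameter transfer: constrain the first k coordinates of the target solution
# to the source-learned values, then fit the remaining p - k free parameters
# by minimum-norm least squares on the residual.
w_tgt = np.zeros(p)
w_tgt[:k] = w_src[:k]
resid = y_t - X_t[:, :k] @ w_src[:k]
w_tgt[k:] = np.linalg.pinv(X_t[:, k:]) @ resid

# Proxy for the target task's generalization error under isotropic inputs:
# squared distance between the learned and true target coefficients.
gen_err = np.mean((w_tgt - beta_tgt) ** 2)
```

Since the target task still has more free parameters than examples (p - k > n_t), the constrained solution interpolates the target training data exactly; varying k, p, n_s, and n_t in this sketch traces out the two-dimensional double descent behavior analyzed in the paper.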