A key problem in the theory of meta-learning is to understand how the task distribution influences transfer risk, the expected error of a meta-learner on a new task drawn from the unknown task distribution. In this paper, focusing on fixed design linear regression with Gaussian noise and a Gaussian task (or parameter) distribution, we give distribution-dependent lower bounds on the transfer risk of any algorithm, and we show that a novel, weighted version of the so-called biased regularized regression method matches these lower bounds up to a fixed constant factor. Notably, the weighting is derived from the covariance of the Gaussian task distribution. Altogether, our results provide a precise characterization of the difficulty of meta-learning in this Gaussian setting. While this problem setting may appear simple, we show that it is rich enough to unify the "parameter sharing" and "representation learning" streams of meta-learning; in particular, representation learning is obtained as the special case in which the covariance matrix of the task distribution is unknown. For this case we propose to adopt the EM method, which is shown to enjoy efficient updates in our setting. The paper concludes with an empirical study of EM. In particular, our experimental results show that the EM algorithm can attain the lower bound as the number of tasks grows, and that it remains competitive with its alternatives when used in a representation learning context.
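To make the two algorithmic ingredients mentioned above concrete, the following is a minimal sketch, not the paper's implementation, assuming the standard hierarchical Gaussian model theta_t ~ N(mu, Sigma), y_t = X_t theta_t + noise with noise variance sigma^2; all function and variable names are illustrative. The first routine is ridge regression biased towards mu and weighted by the task covariance Sigma (the posterior-mean/MAP estimate for a single task); the second gives the textbook EM updates for the unknown mu and Sigma from several tasks' data.

import numpy as np


def weighted_biased_ridge(X, y, mu, Sigma, sigma2):
    """Posterior-mean estimate of one task's parameter: minimizes
    ||y - X theta||^2 + sigma2 * (theta - mu)' Sigma^{-1} (theta - mu)."""
    Sigma_inv = np.linalg.inv(Sigma)
    A = X.T @ X + sigma2 * Sigma_inv
    b = X.T @ y + sigma2 * (Sigma_inv @ mu)
    return np.linalg.solve(A, b)


def em_gaussian_meta(Xs, ys, sigma2, n_iter=100):
    """EM estimation of the task-distribution parameters (mu, Sigma)
    from per-task data (Xs[t], ys[t]) under the hierarchical model above."""
    d = Xs[0].shape[1]
    mu, Sigma = np.zeros(d), np.eye(d)
    T = len(Xs)
    for _ in range(n_iter):
        # E-step: Gaussian posterior over each task parameter.
        means, covs = [], []
        Sigma_inv = np.linalg.inv(Sigma)
        for X, y in zip(Xs, ys):
            V = np.linalg.inv(X.T @ X / sigma2 + Sigma_inv)   # posterior covariance
            m = V @ (X.T @ y / sigma2 + Sigma_inv @ mu)       # posterior mean
            means.append(m)
            covs.append(V)
        # M-step: re-estimate the mean and covariance of the task distribution.
        mu = np.mean(means, axis=0)
        Sigma = sum(V + np.outer(m - mu, m - mu) for m, V in zip(means, covs)) / T
    return mu, Sigma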