This paper investigates the use of nonparametric kernel regression to obtain a task-similarity-aware meta-learning algorithm. Our hypothesis is that task-similarity helps meta-learning when the available tasks are limited and may contain outlier or dissimilar tasks. While existing meta-learning approaches implicitly assume that the tasks are similar, it is generally unclear how this task-similarity can be quantified and used in learning. As a result, most popular meta-learning approaches do not actively use the similarity or dissimilarity between tasks, but instead rely on the availability of a large number of tasks. Our contribution is a novel framework that explicitly captures task-similarity in the form of kernels, together with an associated meta-learning algorithm. We model the task-specific parameters as belonging to a reproducing kernel Hilbert space, where the kernel function captures the similarity across tasks. The proposed algorithm iteratively learns a meta-parameter that is used to assign a task-specific descriptor to every task; the task descriptors are then used to quantify the task-similarity through the kernel function. We show how our approach conceptually generalizes the popular meta-learning approaches of model-agnostic meta-learning (MAML) and meta-stochastic gradient descent (Meta-SGD). Numerical experiments with regression tasks show that our algorithm outperforms these approaches when the number of tasks is limited, even in the presence of outlier or dissimilar tasks. This supports our hypothesis that task-similarity helps improve meta-learning performance in task-limited and adverse settings.
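To make the kernel-based task-similarity idea concrete, the following is a minimal sketch, not the paper's implementation: each task is summarized by a descriptor vector, descriptors are compared through a Gaussian (RBF) kernel, and the resulting similarity weights combine task-specific parameters in a Nadaraya-Watson (kernel regression) fashion. The function names, the Gaussian kernel choice, and the toy dimensions are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(z_i, z_j, bandwidth=1.0):
    """Similarity between two task descriptors via a Gaussian (RBF) kernel.
    The kernel choice is an assumption; any positive-definite kernel works."""
    d = z_i - z_j
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

def kernel_regression_params(z_new, task_descriptors, task_params, bandwidth=1.0):
    """Nadaraya-Watson estimate of parameters for a new task: a
    similarity-weighted average of the known task-specific parameters."""
    weights = np.array(
        [gaussian_kernel(z_new, z, bandwidth) for z in task_descriptors]
    )
    weights /= weights.sum()  # normalize; assumes at least one nonzero weight
    return weights @ np.stack(task_params)  # (T,) @ (T, P) -> (P,)

# Toy usage: three tasks with 2-D descriptors and 4-D parameter vectors.
rng = np.random.default_rng(0)
descriptors = [rng.normal(size=2) for _ in range(3)]
params = [rng.normal(size=4) for _ in range(3)]
theta_new = kernel_regression_params(rng.normal(size=2), descriptors, params)
print(theta_new.shape)  # (4,)
```

In the paper's framework, the descriptors themselves are produced from a learned meta-parameter rather than given, so a similar weighting would sit inside an iterative meta-training loop.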