Meta-learning refers to the process of abstracting a learning rule for a class of tasks through a meta-parameter that captures the inductive bias for the class. The meta-parameter is used to achieve fast adaptation to unseen tasks from the class, given a few training samples. While meta-learning implicitly assumes that the tasks are similar, it is generally unclear how this similarity can be quantified. Further, many of the popular meta-learning approaches do not actively use such a task-similarity in solving for the tasks. In this paper, we propose a task-similarity aware nonparametric meta-learning algorithm that explicitly employs the similarity/dissimilarity between tasks through nonparametric kernel regression. Our approach models the task-specific parameters as lying in a reproducing kernel Hilbert space, wherein the kernel function captures the similarity across tasks. The proposed algorithm iteratively learns a meta-parameter which is used to assign a task-specific descriptor to every task. The task descriptors are then used to quantify the similarity through the kernel function. We show how our approach generalizes the popular meta-learning approaches of model-agnostic meta-learning (MAML) and meta-stochastic gradient descent (Meta-SGD). Numerical experiments with regression tasks show that our algorithm performs well even in the presence of outlier or dissimilar tasks, validating the proposed approach.
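To make the idea concrete, the sketch below illustrates the kind of nonparametric kernel regression over task descriptors that the abstract describes. It is a minimal illustration under assumed choices, not the paper's algorithm: it assumes linear regression tasks, an RBF kernel, and a gradient-based task descriptor; the helper names (`task_descriptor`, `kernel_regression_params`) are hypothetical.

```python
import numpy as np

def rbf_kernel(d1, d2, gamma=1.0):
    """RBF kernel measuring similarity between two task descriptors."""
    return np.exp(-gamma * np.sum((d1 - d2) ** 2))

def task_descriptor(meta_param, support_x, support_y):
    """Illustrative descriptor: gradient of the support-set squared loss
    evaluated at the meta-parameter (one possible choice, assumed here)."""
    preds = support_x @ meta_param
    return support_x.T @ (preds - support_y) / len(support_y)

def kernel_regression_params(new_desc, train_descs, train_params, gamma=1.0):
    """Nadaraya-Watson style estimate of task-specific parameters for a new
    task, formed as a similarity-weighted average over seen tasks."""
    weights = np.array([rbf_kernel(new_desc, d, gamma) for d in train_descs])
    weights /= weights.sum() + 1e-12
    return np.sum(weights[:, None] * np.stack(train_params), axis=0)
```

In this toy view, the meta-parameter only enters through the descriptors, and adaptation to a new task reduces to a kernel-weighted combination of previously learned task-specific parameters; the paper's RKHS formulation is richer than this averaging sketch.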