Model Agnostic Meta-Learning (MAML) has emerged as a standard framework for meta-learning, where a meta-model is learned with the ability to adapt quickly to new tasks. However, as a double-loop optimization problem, MAML must differentiate through the whole inner-loop optimization path for every outer-loop training step, which may lead to both computational inefficiency and sub-optimal solutions. In this paper, we generalize MAML to allow meta-learning to be defined in function spaces, and propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK). Within this paradigm, we introduce two meta-learning algorithms in the RKHS, which no longer require the sub-optimal iterative inner-loop adaptation of the MAML framework. We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on NTK theory. Extensive experimental studies demonstrate the advantages of our paradigm in both efficiency and solution quality compared to related meta-learning algorithms. Our experiments also show that the proposed methods are more robust to adversarial attacks and out-of-distribution adaptation than popular baselines.
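To make the analytic adaptation concrete, here is a minimal sketch under standard NTK-linearization assumptions (not necessarily the exact formulation used in the paper): given a task's support set $(X_s, Y_s)$, a meta-model $f_\theta$, and its NTK $k_\theta$, the adapted predictor at a query point $x$ admits a kernel-ridge-regression closed form

$$f^\ast(x) \;=\; f_\theta(x) \;+\; k_\theta(x, X_s)\,\big(K_\theta(X_s, X_s) + \lambda I\big)^{-1}\big(Y_s - f_\theta(X_s)\big),$$

where $K_\theta(X_s, X_s)$ is the NTK Gram matrix on the support set and $\lambda \ge 0$ plays the role of a fast-adaptive regularizer: larger $\lambda$ keeps the adapted function closer to the meta-model in the RKHS. Because this solution is obtained in one linear solve rather than by unrolled gradient steps, the outer loop no longer needs to differentiate through an iterative inner-loop optimization path.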