Model-Agnostic Meta-Learning (MAML), a popular gradient-based meta-learning framework, assumes that each task or instance contributes equally to the meta-learner. As a result, it cannot address the domain shift between base and novel classes in few-shot learning. In this work, we propose a novel robust meta-learning algorithm, NestedMAML, which learns to assign weights to training tasks or instances. We treat the weights as hyper-parameters and iteratively optimize them using a small set of validation tasks in a nested bi-level optimization approach (in contrast to the standard bi-level optimization in MAML). We then apply NestedMAML in the meta-training stage, which involves (1) several tasks sampled from a distribution different from the meta-test task distribution, or (2) some data samples with noisy labels. Extensive experiments on synthetic and real-world datasets demonstrate that NestedMAML efficiently mitigates the effects of "unwanted" tasks or instances, leading to significant improvements over state-of-the-art robust meta-learning methods.
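To make the nested bi-level structure concrete, the sketch below shows one way to implement it on toy regression tasks: the inner level adapts per task, the outer level minimizes a weighted meta-loss, and a third (nested) level updates the task weights by backpropagating a validation loss through a one-step lookahead of the meta-update. This is a minimal illustration under assumed choices, not the authors' exact algorithm; the linear model, `sample_task`, the softmax weighting, the one-step lookahead, and all learning rates are hypothetical.

```python
import torch

def model(params, x):
    # Tiny linear model: y = x @ W + b.
    W, b = params
    return x @ W + b

def loss_fn(params, x, y):
    return ((model(params, x) - y) ** 2).mean()

def inner_adapt(params, x, y, inner_lr=0.01):
    # One gradient step on a task's support set (standard MAML inner loop);
    # create_graph=True keeps second-order terms for the levels above.
    loss = loss_fn(params, x, y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]

def sample_task(slope, n=10):
    # Hypothetical task generator: noiseless 1-D regression y = slope * x.
    x = torch.randn(n, 1)
    return x, slope * x

# Meta-parameters and per-task weight logits (weights as hyper-parameters).
params = [torch.zeros(1, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
train_slopes = [1.0, 1.0, 1.0, -5.0]  # last task simulates distribution shift
val_slope = 1.0                       # validation tasks match the target domain
weight_logits = torch.zeros(len(train_slopes), requires_grad=True)

meta_lr = 0.1
weight_opt = torch.optim.SGD([weight_logits], lr=0.5)

for step in range(200):
    weights = torch.softmax(weight_logits, dim=0)

    # Outer level: weighted meta-loss over training tasks.
    meta_loss = 0.0
    for i, slope in enumerate(train_slopes):
        xs, ys = sample_task(slope)   # support set
        xq, yq = sample_task(slope)   # query set
        adapted = inner_adapt(params, xs, ys)
        meta_loss = meta_loss + weights[i] * loss_fn(adapted, xq, yq)

    # Nested level: a one-step lookahead of the meta-update that stays
    # differentiable in the task weights, so the validation loss can be
    # backpropagated to the weights.
    grads = torch.autograd.grad(meta_loss, params, create_graph=True)
    lookahead = [p - meta_lr * g for p, g in zip(params, grads)]

    xv, yv = sample_task(val_slope)
    val_loss = loss_fn(lookahead, xv, yv)
    weight_opt.zero_grad()
    weight_logits.grad = torch.autograd.grad(val_loss, weight_logits)[0]
    weight_opt.step()

    # Apply the meta-update itself with the gradients computed above.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= meta_lr * g

print("learned task weights:", torch.softmax(weight_logits, 0).tolist())
```

In this toy setup, minimizing the validation loss should drive the weight on the mismatched task (slope -5.0) toward zero, illustrating how "unwanted" tasks can be down-weighted by the nested level.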