Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks given only a limited amount of labeled data. The research literature on few-shot learning is highly diverse, and different algorithms often excel in different few-shot scenarios, making it difficult to decide which learning strategy to use under a given task condition. Inspired by recent successes in the Automated Machine Learning (AutoML) literature, in this paper we present Meta Navigator, a framework that addresses this limitation by seeking a higher-level strategy that automates the selection among various few-shot learning designs. The goal of our work is to search for good parameter adaptation policies to apply at different stages of the network for few-shot classification. We present a search space that covers many popular few-shot learning algorithms in the literature and develop a differentiable searching-and-decoding algorithm, based on meta-learning, that supports gradient-based optimization. We demonstrate the effectiveness of our search-based method on multiple benchmark datasets. Extensive experiments show that our approach significantly outperforms baselines and offers performance advantages over many state-of-the-art methods. Code and models will be made publicly available.
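To illustrate the idea of a differentiable search over parameter adaptation policies, here is a minimal sketch in the style of continuous relaxation (DARTS-like): each network stage's update is a softmax-weighted mixture of candidate adaptation policies, so the selection weights can be optimized by gradient descent and then decoded by taking the argmax. The policy names and the mixture formulation below are illustrative assumptions, not the paper's exact search space.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over architecture weights."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate parameter-adaptation policies for one stage
# (illustrative only; the actual search space is defined in the paper).
def freeze(w, grad, lr):
    # Keep the pretrained weights fixed.
    return w

def finetune(w, grad, lr):
    # Standard gradient update of the stage's weights.
    return w - lr * grad

def scale_adapt(w, grad, lr):
    # Adapt only a scalar scale on the pretrained weights.
    return w * (1.0 - lr * np.mean(grad * w))

POLICIES = [freeze, finetune, scale_adapt]

def mixed_update(w, grad, alpha, lr=0.1):
    """Continuous relaxation: the stage update is a softmax-weighted
    mixture of all candidate policies, so `alpha` is differentiable
    with respect to the meta-objective."""
    p = softmax(alpha)
    return sum(pi * pol(w, grad, lr) for pi, pol in zip(p, POLICIES))

def decode(alpha):
    """After search, keep only the highest-weighted policy per stage."""
    return POLICIES[int(np.argmax(alpha))].__name__
```

In a full meta-learning loop, `alpha` would be updated on validation (query-set) loss while the stage weights are adapted on support data; after search converges, `decode` yields one discrete policy per stage.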