Meta-learning is a popular framework for learning with limited data in which an algorithm is produced by training over multiple few-shot learning tasks. For classification problems, these tasks are typically constructed by sampling a small number of support and query examples from a subset of the classes. While conventional wisdom is that task diversity should improve the performance of meta-learning, in this work we find evidence to the contrary: we propose a modification to traditional meta-learning approaches in which we keep the support sets fixed across tasks, thus reducing task diversity. Surprisingly, we find that not only does this modification not result in adverse effects, it almost always improves the performance for a variety of datasets and meta-learning methods. We also provide several initial analyses to understand this phenomenon. Our work serves to: (i) more closely investigate the effect of support set construction for the problem of meta-learning, and (ii) suggest a simple, general, and competitive baseline for few-shot learning.
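The modification described above, reusing one support set across tasks instead of resampling it per episode, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `sample_task`, the dict-based dataset layout, and the integer stand-ins for examples are all assumptions made for this sketch.

```python
import random

def sample_task(dataset, n_way, k_shot, q_query, fixed_support=None):
    """Build one few-shot episode. `dataset` maps class label -> list of examples.
    If `fixed_support` is given (the fixed-support variant, as we read it),
    reuse that support set for every task and resample only the query set."""
    if fixed_support is not None:
        classes = list(fixed_support.keys())
        support = fixed_support
    else:
        # conventional episodic sampling: new classes and support every task
        classes = random.sample(list(dataset.keys()), n_way)
        support = {c: random.sample(dataset[c], k_shot) for c in classes}
    # query examples come from the same classes, excluding support items
    query = {}
    for c in classes:
        pool = [x for x in dataset[c] if x not in support[c]]
        query[c] = random.sample(pool, q_query)
    return support, query

# toy dataset: 5 classes, 20 examples each (integers stand in for images)
data = {c: list(range(c * 100, c * 100 + 20)) for c in range(5)}

# conventional meta-learning: support set resampled every episode
s1, q1 = sample_task(data, n_way=3, k_shot=2, q_query=4)

# fixed-support variant: freeze one support set, vary only the queries
fixed_s, _ = sample_task(data, n_way=3, k_shot=2, q_query=4)
s2, q2 = sample_task(data, n_way=3, k_shot=2, q_query=4, fixed_support=fixed_s)
```

Under this reading, only the query sets (and hence the outer-loop loss) differ between episodes in the fixed-support variant, which is what reduces task diversity.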