Meta-learning has been the most common framework for few-shot learning in recent years. It learns the model from collections of few-shot classification tasks, which is believed to have the key advantage of making the training objective consistent with the testing objective. However, some recent works report that training for whole-classification, i.e., classification over the whole label set, can yield embeddings comparable to or even better than those of many meta-learning algorithms. The boundary between these two lines of work has not been thoroughly explored, and the effectiveness of meta-learning in few-shot learning remains unclear. In this paper, we explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric. We observe that this simple method achieves competitive performance with state-of-the-art methods on standard benchmarks. Our further analysis sheds some light on the trade-offs between the meta-learning objective and the whole-classification objective in few-shot learning.
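The "simple process" above is only described at a high level. The following is a minimal sketch of what such a stage could look like in PyTorch, assuming the whole-classification pre-training has already produced an embedding network `encoder` (the classifier head is discarded); all names, shapes, and the temperature `tau` are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

# Stage 1 (assumed done beforehand): pre-train `encoder` with a linear
# classifier over all base classes (whole-classification), then discard
# the classifier head and keep only the embedding network.

def meta_baseline_logits(encoder, support_x, support_y, query_x, n_way, tau=10.0):
    """Nearest-centroid classification with cosine similarity, i.e. the
    few-shot evaluation metric reused as the meta-learning objective.

    support_x: [n_way * k_shot, C, H, W] support images
    support_y: [n_way * k_shot] labels in {0, ..., n_way - 1}
    query_x:   [n_query, C, H, W] query images
    tau:       temperature scaling the cosine scores (could be learnable)
    """
    z_support = encoder(support_x)             # [n_way * k_shot, d]
    z_query = encoder(query_x)                 # [n_query, d]

    # Class centroids: mean embedding of each class's support examples.
    centroids = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_way)
    ])                                         # [n_way, d]

    # Cosine similarity between each query and each centroid, scaled by tau.
    logits = tau * F.cosine_similarity(
        z_query.unsqueeze(1), centroids.unsqueeze(0), dim=-1
    )                                          # [n_query, n_way]
    return logits

# Meta-learning stage: repeatedly sample few-shot tasks from the base
# classes and fine-tune the pre-trained encoder with cross-entropy on
# the query set, e.g.:
#   loss = F.cross_entropy(
#       meta_baseline_logits(encoder, sx, sy, qx, n_way), query_y)
```

The key design point the sketch captures is that the loss used for meta-learning is exactly the nearest-centroid metric used at few-shot evaluation time, which makes the training objective consistent with the testing objective.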