Few-shot learning (FSL) aims to train a strong classifier using limited labeled examples. Many existing works take the meta-learning approach, sampling few-shot tasks in turn and optimizing the few-shot learner's performance on classifying the query examples. In this paper, we point out two potential weaknesses of this approach. First, the sampled query examples may not provide sufficient supervision for the few-shot learner. Second, the effectiveness of meta-learning diminishes sharply with increasing shots (i.e., the number of training examples per class). To resolve these issues, we propose a novel objective that directly trains the few-shot learner to perform like a strong classifier. Concretely, we associate each sampled few-shot task with a strong classifier, which is learned from ample labeled examples. The strong classifier has better generalization ability, and we use it to supervise the few-shot learner. We present an efficient way to construct the strong classifier, making our proposed objective an easy plug-and-play term for existing meta-learning based FSL methods. We validate our approach in combination with many representative meta-learning methods. On several benchmark datasets, including miniImageNet and tieredImageNet, our approach leads to notable improvements across a variety of tasks. More importantly, with our approach, meta-learning based FSL methods can consistently outperform non-meta-learning based ones, even in a many-shot setting, greatly strengthening their applicability.
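The proposed objective can be illustrated with a minimal sketch. The example below is a hypothetical NumPy illustration, not the paper's implementation: for one sampled few-shot task, the standard meta-learning term is a cross-entropy loss on the query labels, and the proposed term additionally matches the few-shot learner's query predictions to those of a strong classifier (here via a KL divergence, with an assumed weighting `lam`).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Mean KL(p || q) over examples; p, q are rows of probabilities.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

# Hypothetical logits on the query set of one sampled 5-way task:
# `strong_logits` from a classifier trained on ample labeled examples,
# `learner_logits` from the few-shot learner built from the support set.
rng = np.random.default_rng(0)
strong_logits = rng.normal(size=(8, 5))   # 8 query examples, 5 classes
learner_logits = rng.normal(size=(8, 5))
labels = rng.integers(0, 5, size=8)

# Standard meta-learning term: cross-entropy on query labels.
probs = softmax(learner_logits)
ce = -np.mean(np.log(probs[np.arange(8), labels] + 1e-12))

# Proposed extra term: imitate the strong classifier's predictions.
distill = kl_divergence(softmax(strong_logits), probs)

lam = 1.0  # hypothetical weight balancing the two terms
loss = ce + lam * distill
```

In practice the gradient of `loss` would be taken with respect to the few-shot learner's parameters; the strong classifier only provides the supervision signal.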