Meta-learning has become a practical approach to few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can then be applied to tasks over novel classes. We remove the requirement of base-class labels and learn generalizable embeddings via Unsupervised Meta-Learning (UML). Specifically, episodes of tasks are constructed with data augmentations from unlabeled base classes during meta-training, and embedding-based classifiers are applied to novel tasks with labeled few-shot examples during meta-test. We observe that two elements play important roles in UML: the way tasks are sampled and the way similarities between instances are measured. We therefore obtain a strong baseline with two simple modifications -- a sufficient sampling strategy that efficiently constructs multiple tasks per episode, together with a semi-normalized similarity. We then exploit the characteristics of tasks from two directions to obtain further improvements. First, synthesized confusing instances are incorporated to help extract more discriminative embeddings. Second, we utilize an additional task-specific embedding transformation as an auxiliary component during meta-training to promote the generalization ability of the pre-adapted embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods and achieve comparable or even better performance than their supervised variants.
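The two baseline ingredients above -- building pseudo-tasks from augmented views of unlabeled instances, and scoring queries with a semi-normalized similarity -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and we assume "semi-normalized" means L2-normalizing only the support-side prototypes while leaving query embeddings un-normalized.

```python
import numpy as np

def semi_normalized_similarity(queries, prototypes):
    """Hypothetical semi-normalized similarity: only the class
    prototypes are L2-normalized; query embeddings keep their norm,
    so the logit scale still reflects query confidence."""
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return queries @ protos.T  # (n_query, n_class) logits

def pseudo_episode(embed_view1, embed_view2):
    """Build one UML pseudo-task from two augmented views of the same
    unlabeled batch: each instance acts as its own pseudo-class, view 1
    forms a 1-shot support set, and view 2 supplies the queries."""
    logits = semi_normalized_similarity(embed_view2, embed_view1)
    labels = np.arange(len(embed_view1))  # query i matches support i
    return logits, labels
```

With a shared encoder, the same embedded batch can be re-partitioned into multiple such support/query splits, which is one way to realize the "multiple tasks per episode" sampling strategy cheaply.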