Unsupervised meta-learning aims to learn generalizable knowledge across a distribution of tasks constructed from unlabeled data. Here, the main challenge is how to construct diverse tasks for meta-learning without label information; recent works have proposed task construction strategies such as pseudo-labeling via pretrained representations or creating synthetic samples via generative models. However, such task construction strategies are fundamentally limited by their heavy reliance on immutable pseudo-labels during meta-learning and on the quality of the representations or the generated samples. To overcome these limitations, we propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo), for few-shot classification. Inspired by the recent self-supervised learning literature, PsCo utilizes a momentum network and a queue of previous batches to improve pseudo-labeling and construct diverse tasks in a progressive manner. Our extensive experiments demonstrate that PsCo outperforms existing unsupervised meta-learning methods on various in-domain and cross-domain few-shot classification benchmarks. We also validate that PsCo scales easily to a large-scale benchmark, whereas recent prior-art meta-learning schemes do not.
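To make the mechanism concrete, below is a minimal PyTorch sketch of the two components named above: a momentum network updated as an exponential moving average of the online encoder, and a FIFO queue of embeddings from previous batches from which pseudo support sets are drawn. All specifics (the `PsCoSketch` name, the MLP encoder, the top-k pseudo-support selection, and the temperature) are illustrative assumptions, not the authors' reference implementation; in particular, top-k nearest-neighbor selection stands in for the paper's actual pseudo-label assignment procedure, which the abstract does not specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PsCoSketch(nn.Module):
    """Momentum network + queue for progressively constructed pseudo-supervised tasks."""

    def __init__(self, dim=128, queue_size=4096, momentum=0.99, shots=4, tau=0.2):
        super().__init__()
        # Illustrative MLP encoders; a real setup would use a few-shot backbone.
        self.encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
        self.momentum_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
        for p, p_m in zip(self.encoder.parameters(), self.momentum_encoder.parameters()):
            p_m.data.copy_(p.data)
            p_m.requires_grad = False
        self.m, self.shots, self.tau = momentum, shots, tau
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _update_momentum_encoder(self):
        # EMA update: the momentum network slowly trails the online encoder.
        for p, p_m in zip(self.encoder.parameters(), self.momentum_encoder.parameters()):
            p_m.data.mul_(self.m).add_(p.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys):
        # FIFO queue of momentum embeddings from previous batches
        # (assumes queue_size is divisible by the batch size).
        n, ptr = keys.shape[0], int(self.ptr)
        self.queue[ptr:ptr + n] = keys
        self.ptr[0] = (ptr + n) % self.queue.shape[0]

    def forward(self, x_query, x_key):
        q = F.normalize(self.encoder(x_query), dim=1)              # online view
        with torch.no_grad():
            self._update_momentum_encoder()
            k = F.normalize(self.momentum_encoder(x_key), dim=1)   # momentum view
        # Pseudo-supervision: the `shots` queue entries most similar to each
        # query serve as its pseudo support set, defining one task per query.
        sim = q @ self.queue.t()                                   # (B, queue_size)
        support_idx = sim.detach().topk(self.shots, dim=1).indices
        # Multi-positive contrastive loss: pull each query toward its pseudo
        # supports while contrasting against the rest of the queue.
        log_prob = F.log_softmax(sim / self.tau, dim=1)
        targets = torch.zeros_like(sim).scatter_(1, support_idx, 1.0 / self.shots)
        loss = -(targets * log_prob).sum(dim=1).mean()
        self._enqueue(k)  # progressively refresh the queue with new keys
        return loss
```

A single forward/backward pass on random tensors, e.g. `PsCoSketch()(torch.randn(32, 784), torch.randn(32, 784)).backward()`, exercises the full loop: encode, select pseudo supports, contrast, and refresh the queue.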