Few-shot learning is an interesting and challenging problem that enables machines to learn from a few samples, as humans do. Existing studies rarely exploit the auxiliary information available in large amounts of unlabeled data. Self-supervised learning has emerged as an efficient way to utilize unlabeled data. Existing self-supervised learning methods typically rely on combinations of geometric transformations applied to single samples via augmentation, while seriously neglecting the endogenous correlation information among different samples, which is equally important for the task. In this work, we propose Graph-driven Clustering (GC), a novel augmentation-free method for self-supervised learning that does not rely on any auxiliary samples and instead utilizes the endogenous correlation information among input samples. In addition, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine traditional augmentation-based methods with our GC, adaptively learning their optimal weights to improve performance and enabling the feature extractor to obtain more universal representations. We evaluate MAN extensively on the miniImageNet and tieredImageNet datasets, and the results demonstrate that the proposed method outperforms the relevant state-of-the-art (SOTA) methods.
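To make the attention-based combination of pretext tasks concrete, below is a minimal PyTorch sketch, not the paper's actual implementation: the class and variable names, the pooled-context scoring rule, and the two-task setup are all assumptions. It illustrates one way per-pretext-task losses (e.g., an augmentation-based rotation task and a clustering-style task) could be weighted by adaptively learned attention scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPretextAttention(nn.Module):
    """Hypothetical sketch: weight per-pretext-task losses with
    adaptively learned attention scores (names are assumptions)."""

    def __init__(self, num_tasks: int, feat_dim: int):
        super().__init__()
        # One learnable query per pretext task; attention scores come
        # from matching each query against the pooled batch context.
        self.task_queries = nn.Parameter(torch.randn(num_tasks, feat_dim))
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, features: torch.Tensor,
                task_losses: torch.Tensor) -> torch.Tensor:
        # features: (B, D) embeddings from the shared feature extractor
        # task_losses: (T,) scalar loss of each pretext task
        context = self.proj(features.mean(dim=0))   # (D,) pooled context
        scores = self.task_queries @ context        # (T,) task scores
        weights = F.softmax(scores, dim=0)          # adaptive task weights
        return (weights * task_losses).sum()        # weighted total loss

# Toy usage: two pretext tasks, a batch of 8 samples with 128-d features.
if __name__ == "__main__":
    torch.manual_seed(0)
    man = MultiPretextAttention(num_tasks=2, feat_dim=128)
    feats = torch.randn(8, 128)
    losses = torch.stack([torch.tensor(1.3), torch.tensor(0.7)])
    total = man(feats, losses)
    total.backward()  # gradients flow into the attention parameters
    print(total.item())
```

The design choice sketched here, softmax-normalized weights driven by learnable task queries, keeps the combined loss differentiable, so the task weighting is optimized jointly with the feature extractor rather than hand-tuned.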