Few-shot learning aims to build classifiers for new classes from a small number of labeled examples and is commonly facilitated by access to examples from a distinct set of 'base classes'. The difference in data distribution between the test set (novel classes) and the base classes used to learn an inductive bias often results in poor generalization on the novel classes. To alleviate problems caused by this distribution shift, previous research has explored the use of unlabeled examples from the novel classes, in addition to labeled examples of the base classes, a setting known as transductive few-shot learning. In this work, we show that, surprisingly, off-the-shelf self-supervised learning outperforms transductive few-shot methods by 3.9% in 5-shot accuracy on miniImageNet without using any base class labels. This motivates us to examine more carefully the role of features learned through self-supervision in few-shot learning. We conduct comprehensive experiments comparing the transferability, robustness, efficiency, and complementarity of supervised and self-supervised features.