Few-shot image classification aims to classify unseen classes with limited labelled samples. Recent works benefit from meta-learning over episodic tasks and can quickly adapt to novel classes from training to testing. Because each task contains only a few samples, the initial embedding network becomes an essential component of meta-learning and can largely determine performance in practice. Consequently, most existing methods rely heavily on an effective embedding network. However, with limited labelled data, the scale of an embedding network trained in a supervised learning (SL) manner is constrained, which becomes a bottleneck for few-shot learning methods. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which provides robust representations for downstream tasks by learning from the data itself. We evaluate our work through extensive comparisons with previous baseline methods on two few-shot classification datasets ({\em i.e.,} MiniImageNet and CUB) and achieve better performance than the baselines. Tests on four datasets for cross-domain few-shot classification show that the proposed method achieves state-of-the-art results and further demonstrate the robustness of the proposed model. Our code is available at \url{https://github.com/phecy/SSL-FEW-SHOT}.
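Episodic few-shot evaluation of this kind typically pairs the pretrained embedding network with a simple metric-based classifier over the embedded support set. The following is a minimal sketch of that downstream step, assuming a nearest-prototype rule over precomputed embeddings; the function names and toy data are illustrative, not the paper's exact protocol:

```python
import numpy as np

def prototypes(support_emb, labels, n_classes):
    """Mean embedding per class over the labelled support set."""
    return np.stack([support_emb[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 2-shot episode with 3-d embeddings (stand-ins for the
# outputs of a frozen SSL-pretrained embedding network).
support = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.9, 0.1]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.95, 0.05, 0.0], [0.05, 0.95, 0.0]])
print(classify(queries, protos))  # → [0 1]
```

Because the classifier has no trainable parameters, the quality of the embedding space, here the point where SSL pretraining enters, directly determines the episode accuracy.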