Few-shot learning aims to address the data scarcity problem. However, when there is a domain shift between the training set and the test set, the performance of few-shot learners degrades substantially. This setting is known as cross-domain few-shot learning, and it is particularly challenging because the target domain is unseen during training. We therefore propose a new setting in which some unlabeled data from the target domain is available, which can bridge the gap between the source domain and the target domain. We construct a benchmark for this setting based on DomainNet \cite{peng2018oment}. We propose a self-supervised learning method that fully exploits the knowledge in both the labeled training set and the unlabeled set. Extensive experiments show that our method outperforms several baseline methods by a large margin. We also carefully design an episodic training pipeline, which yields a significant performance boost.