Contrastive self-supervised learning methods learn to map data points such as images into a non-parametric representation space without requiring labels. While highly successful, current methods require a large amount of data during training. When the target training set is small, generalization is known to be poor. Pretraining on a large source dataset and fine-tuning on the target samples is prone to overfitting in the few-shot regime, where only a small number of target samples are available. Motivated by this, we propose a domain adaptation method for self-supervised contrastive learning, termed Few-Max, to address the issue of adapting to a target distribution under few-shot learning. To quantify the representation quality, we evaluate Few-Max on a range of source and target datasets, including ImageNet, VisDA, and fastMRI, on which Few-Max consistently outperforms other approaches.
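The abstract does not spell out the training objective; for context, here is a minimal sketch of the standard InfoNCE contrastive loss that methods in this family typically build on. The function name, shapes, and temperature value are illustrative assumptions, not Few-Max's actual loss:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss over a batch of paired views.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Matching rows are positives; all other rows in the batch act as negatives.
    Note: this is a common baseline objective, not the paper's method.
    """
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)
    # Symmetrized cross-entropy: each view must identify its paired view.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In a typical pipeline, `z1` and `z2` come from an encoder applied to two random augmentations of the same batch, and the loss pulls paired embeddings together while pushing apart all other pairs in the batch.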