This paper proposes a novel self-supervised learning method for learning better representations with small batch sizes. Many self-supervised learning methods based on variants of the siamese network have emerged and received significant attention. However, these methods rely on large batch sizes to learn good representations and therefore demand substantial computational resources. We present a new triplet network combined with a triple-view loss to improve the performance of self-supervised representation learning with small batch sizes. Experimental results show that our method substantially outperforms state-of-the-art self-supervised learning methods on several datasets when training with small batches. Our method offers a feasible solution for self-supervised learning on real-world high-resolution images using small batch sizes.
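To make the idea concrete, below is a minimal PyTorch sketch of one plausible reading of the setup described here: three augmented views of each image pass through a shared, weight-tied encoder, and a "triple-view" loss averages a similarity term over the three view pairs. The abstract does not specify the architecture, the loss form, or the pairing scheme, so the toy backbone, projector, negative-cosine loss, and omission of collapse-prevention mechanisms (e.g. stop-gradient or a predictor head, as in other siamese methods) are all assumptions for illustration only.

```python
# Hypothetical sketch of a triplet (three-branch) network with a triple-view loss.
# Architecture, loss, and hyperparameters are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # Toy backbone + projector; the actual encoder is not specified in this section.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
        self.projector = nn.Linear(512, dim)

    def forward(self, x):
        return self.projector(self.backbone(x))

def triple_view_loss(z1, z2, z3):
    # Average negative cosine similarity over the three view pairs.
    def neg_cos(a, b):
        return -F.cosine_similarity(a, b, dim=-1).mean()
    return (neg_cos(z1, z2) + neg_cos(z1, z3) + neg_cos(z2, z3)) / 3

# Usage with a small batch (here 8 images), each represented by three augmented views.
encoder = TripletEncoder()
v1, v2, v3 = (torch.randn(8, 3, 32, 32) for _ in range(3))
loss = triple_view_loss(encoder(v1), encoder(v2), encoder(v3))
loss.backward()
```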