Self-supervised learning is a growing paradigm in deep representation learning, showing strong generalization and competitive performance in low-labeled-data regimes. The SimCLR framework proposes the NT-Xent loss for contrastive representation learning; the objective of this loss is to maximize agreement, i.e., similarity, between sampled positive pairs. This short paper derives an upper bound for the loss and for the average similarity. An analysis of the implications of the bound is not provided here, but we strongly encourage others in the field to pursue one.
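For context, the NT-Xent loss as defined in the SimCLR paper (Chen et al., 2020) for a positive pair of views $(i, j)$ in a batch of $N$ examples ($2N$ augmented views) is

\[
\ell_{i,j} = -\log \frac{\exp\!\left(\mathrm{sim}(z_i, z_j)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\!\left(\mathrm{sim}(z_i, z_k)/\tau\right)},
\qquad
\mathrm{sim}(u, v) = \frac{u^{\top} v}{\lVert u \rVert\,\lVert v \rVert},
\]

where $z_i$ denotes the projected embedding of view $i$ and $\tau > 0$ is the temperature. The loss decreases as the positive-pair similarity $\mathrm{sim}(z_i, z_j)$ grows relative to the similarities with the $2N - 2$ negatives, which is the sense in which the loss maximizes agreement between positive pairs.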