One of the challenges in contrastive learning is the selection of appropriate \textit{hard negative} examples in the absence of label information. Random sampling, or importance sampling based on feature similarity, often leads to sub-optimal performance. In this work, we introduce UnReMix, a hard negative sampling strategy that takes into account anchor similarity, model uncertainty, and representativeness. Experimental results on several benchmarks show that UnReMix improves negative sample selection and, consequently, downstream performance compared to state-of-the-art contrastive learning methods.
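As a brief illustration only, the sketch below shows one plausible way to turn the three signals named above (anchor similarity, model uncertainty, and representativeness) into per-negative sampling weights. This is not the authors' implementation: the function name, the use of per-sample loss as an uncertainty proxy, and the centroid-based representativeness term are all assumptions made for the purpose of the sketch.

\begin{verbatim}
# Minimal illustrative sketch (not the UnReMix implementation).
# Assumes PyTorch; all names and design choices here are hypothetical.
import torch
import torch.nn.functional as F

def hard_negative_weights(anchor, negatives, loss_per_negative,
                          temperature=0.1):
    """Score candidate negatives for one anchor embedding.

    anchor:            (d,)   anchor embedding
    negatives:         (n, d) candidate negative embeddings
    loss_per_negative: (n,)   uncertainty proxy, e.g. per-sample loss
    Returns (n,) sampling weights favouring hard, uncertain,
    representative negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # (1) anchor similarity: negatives close to the anchor are harder
    similarity = negatives @ anchor                    # (n,)

    # (2) model uncertainty: proxied here by the per-sample loss
    uncertainty = loss_per_negative                    # (n,)

    # (3) representativeness: closeness to the candidate-pool centroid
    centroid = F.normalize(negatives.mean(dim=0), dim=-1)
    representativeness = negatives @ centroid          # (n,)

    score = similarity + uncertainty + representativeness
    return torch.softmax(score / temperature, dim=0)

# Example usage with random embeddings
torch.manual_seed(0)
w = hard_negative_weights(torch.randn(128),
                          torch.randn(256, 128),
                          torch.rand(256))
print(w.shape, w.sum())  # torch.Size([256]) tensor(1.)
\end{verbatim}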