Contrastive learning is commonly applied to self-supervised learning and has been shown to outperform traditional approaches such as the triplet loss and the N-pair loss. However, its requirement for large batch sizes and memory banks makes it difficult and slow to train. Recently, Supervised Contrastive approaches have been developed to overcome these problems. They focus on learning a good representation for each class individually, or between clusters of classes. In this work, we attempt to rank classes by similarity using a user-defined ranking, in order to learn an efficient representation across all classes. We observe how incorporating human bias into the learning process could improve representations in the parameter space. We show that our results are comparable to Supervised Contrastive Learning for image classification and object detection, and we discuss its shortcomings in OOD detection.