The core of deep metric learning (DML) is learning visual similarity in a high-dimensional embedding space. One of the main challenges is generalizing from the seen classes of the training data to the unseen classes of the test data. Recent works have focused on exploiting past embeddings to increase the number of instances for seen classes. Such methods achieve performance improvements via augmentation, but the strong focus on seen classes remains. This can be undesirable for DML, where training and test data exhibit entirely different classes. In this work, we present a novel training strategy for DML called MemVir. Unlike previous works, MemVir memorizes both embedding features and class weights and utilizes them as additional virtual classes. Exploiting virtual classes not only provides augmented information for training but also alleviates the strong focus on seen classes, yielding better generalization. Moreover, we incorporate the idea of curriculum learning by slowly adding virtual classes to gradually increase the learning difficulty, which improves both training stability and final performance. MemVir can be easily applied to many existing loss functions without any modification. Extensive experimental results on popular benchmarks demonstrate the superiority of MemVir over state-of-the-art competitors. The code for MemVir is publicly available.
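To make the mechanism concrete, below is a minimal PyTorch sketch of the strategy described above, assuming a Norm-SoftMax-style base loss. All names (MemVirMemory, num_slots, step_gap, scale) are illustrative assumptions for exposition, not the authors' reference implementation.

```python
# Sketch: memorize past (embeddings, class weights, labels) snapshots and
# replay them as additional virtual classes alongside the real ones.
import torch
import torch.nn.functional as F
from collections import deque


class MemVirMemory:
    """Queue of snapshots taken every `step_gap` steps. At most `num_slots`
    snapshots are kept, so the number of virtual classes grows slowly from
    zero -- mirroring the curriculum-learning aspect described above."""

    def __init__(self, num_slots=3, step_gap=100):
        self.queue = deque(maxlen=num_slots)
        self.step_gap = step_gap
        self.step = 0

    def update(self, embeddings, class_weights, labels):
        if self.step % self.step_gap == 0:
            # Detach so virtual classes act as fixed samples/negatives
            # and receive no gradient.
            self.queue.append((embeddings.detach().clone(),
                               class_weights.detach().clone(),
                               labels.clone()))
        self.step += 1


def memvir_loss(embeddings, labels, class_weights, memory, scale=23.0):
    """Norm-SoftMax-style loss over real plus virtual classes.
    Snapshot k is offset by (k + 1) * C so its classes stay distinct
    from the real classes and from other snapshots."""
    C = class_weights.size(0)
    embs, labs, weights = [embeddings], [labels], [class_weights]
    for k, (e, w, l) in enumerate(memory.queue):
        embs.append(e)                # memorized embeddings: virtual samples
        weights.append(w)             # memorized weights: virtual classes
        labs.append(l + (k + 1) * C)  # relabel into the virtual class range
    logits = scale * F.linear(F.normalize(torch.cat(embs)),
                              F.normalize(torch.cat(weights)))
    loss = F.cross_entropy(logits, torch.cat(labs))
    memory.update(embeddings, class_weights, labels)
    return loss
```

Because the queue starts empty, early training uses only real classes and the difficulty increases only as slots fill, and since the virtual classes merely enlarge the set of classes the base loss sees, the loss function itself needs no modification, consistent with the plug-and-play claim above.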