Pair-wise loss functions have been extensively studied and shown to steadily improve the performance of deep metric learning (DML). However, they are designed primarily from intuition built on simple toy examples, and experimentally identifying the truly effective design is difficult in complicated, real-world cases. In this paper, we provide a new methodology for systematically studying the weighting strategies of various pair-wise loss functions, and rethink pair weighting with an embedding memory. We delve into the weighting mechanisms by decomposing the pair-wise functions, and study positive and negative weights separately using direct weight assignment. This allows us to examine various weighting functions deeply and systematically via weight curves, and to identify a number of meaningful, comprehensive and insightful facts, which lead to our key observation on memory-based DML: it is critical to mine hard negatives and discard easy negatives, which are less informative and redundant, while weighting positive pairs is not helpful. This results in an efficient yet surprisingly simple rule for designing the weighting scheme, in sharp contrast to existing mini-batch based methods, which carefully design various sophisticated loss functions to weight pairs. Finally, we conduct extensive experiments on three large-scale visual retrieval benchmarks, and demonstrate the superiority of memory-based DML over recent mini-batch based approaches, using only a simple contrastive loss with a momentum-updated memory.
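To make the stated weighting rule concrete, the following PyTorch-style sketch illustrates one possible reading of a memory-based contrastive loss: positive pairs receive no extra weighting, negative pairs are kept only when they are hard (similarity above a margin), and the embedding memory is updated with a momentum rule. All names, margins and the per-slot momentum update here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

class MemoryContrastiveLoss(torch.nn.Module):
    """Minimal sketch: contrastive loss against an embedding memory with
    hard-negative mining and momentum-updated memory slots (assumed design)."""

    def __init__(self, memory_size, dim, num_classes, neg_margin=0.5, momentum=0.5):
        super().__init__()
        self.neg_margin = neg_margin      # negatives with similarity below this are discarded
        self.momentum = momentum          # memory momentum coefficient (assumed value)
        # Embedding memory and its labels; assumed to be initialized from the training set.
        self.register_buffer("memory", F.normalize(torch.randn(memory_size, dim), dim=1))
        self.register_buffer("memory_labels", torch.randint(0, num_classes, (memory_size,)))

    @torch.no_grad()
    def update_memory(self, embeddings, indices):
        # Momentum update of the memory slots corresponding to the batch samples.
        slot = self.momentum * self.memory[indices] + (1.0 - self.momentum) * embeddings
        self.memory[indices] = F.normalize(slot, dim=1)

    def forward(self, embeddings, labels, indices):
        embeddings = F.normalize(embeddings, dim=1)
        sim = embeddings @ self.memory.t()                         # (B, M) cosine similarities
        pos_mask = labels.unsqueeze(1) == self.memory_labels.unsqueeze(0)
        neg_mask = ~pos_mask

        # Positive pairs: plain contrastive pulling term, no pair weighting.
        pos_loss = (1.0 - sim)[pos_mask].sum()

        # Negative pairs: keep only hard negatives (sim > neg_margin); easy ones contribute zero.
        neg_loss = (sim - self.neg_margin)[neg_mask].clamp(min=0).sum()

        self.update_memory(embeddings.detach(), indices)
        return (pos_loss + neg_loss) / embeddings.size(0)
```

In this sketch the hinge on negatives implements both mining and discarding in one step: any negative whose similarity falls below `neg_margin` is clamped to zero and thus removed from the gradient, matching the observation that easy negatives are redundant.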