Optimizing the approximation of Average Precision (AP) has been widely studied for image retrieval. Such methods consider both negative and positive instances ranked before each positive instance. However, we claim that only penalizing negative instances before positive ones is sufficient, because the loss stems only from them. To this end, we propose a novel loss, namely Penalizing Negative instances before Positive ones (PNP), which directly minimizes the number of negative instances before each positive one. Meanwhile, AP-based methods adopt a sub-optimal gradient assignment strategy. We systematically investigate different gradient assignment solutions by constructing derivative functions of the loss, resulting in PNP-I with increasing derivative functions and PNP-D with decreasing ones. PNP-I focuses more on hard positive instances by assigning larger gradients to them and tries to pull all relevant instances closer together. In contrast, considering that such instances may belong to another center of the corresponding category, PNP-D pays less attention to them and keeps them as they were. For most real-world data, one class usually contains several local clusters; thus, PNP-D is more suitable for such situations. Experiments on three standard retrieval datasets show results consistent with the above analysis. Extensive evaluations demonstrate that PNP-D achieves state-of-the-art performance.
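For illustration, the following is a minimal PyTorch sketch of the core idea under simplifying assumptions: a mini-batch similarity matrix, a sigmoid-smoothed ranking indicator, and log/quadratic wrappers as stand-ins for the decreasing/increasing derivative functions. The function name, arguments, and wrapper choices are illustrative, not the exact formulation evaluated in the experiments.

```python
import torch

def pnp_loss(sim, labels, tau=0.05, variant="D"):
    """Sketch of a PNP-style loss (illustrative assumptions, not the exact paper formulation).

    sim:    (B, B) pairwise cosine similarities within a mini-batch
    labels: (B,)   class labels
    For each positive of an anchor, the negatives ranked before it are counted with a
    sigmoid-smoothed indicator; the wrapper applied to that count determines whether
    the derivative is increasing (PNP-I style) or decreasing (PNP-D style).
    """
    B = sim.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=sim.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    neg_mask = (labels.unsqueeze(0) != labels.unsqueeze(1)) & ~eye

    losses = []
    for i in range(B):
        pos_sims = sim[i][pos_mask[i]]      # similarities to positives of anchor i
        neg_sims = sim[i][neg_mask[i]]      # similarities to negatives of anchor i
        if pos_sims.numel() == 0 or neg_sims.numel() == 0:
            continue
        # soft indicator: ~1 when a negative is ranked before (more similar than) a positive
        ranked_before = torch.sigmoid((neg_sims.unsqueeze(0) - pos_sims.unsqueeze(1)) / tau)
        n_before = ranked_before.sum(dim=1)  # soft count of negatives before each positive
        if variant == "D":
            # derivative 1 / (1 + n) decreases with n: hard positives get smaller gradients
            losses.append(torch.log1p(n_before).mean())
        else:
            # derivative 2 * n increases with n: hard positives get larger gradients
            losses.append(n_before.pow(2).mean())
    if not losses:
        return sim.new_zeros(())
    return torch.stack(losses).mean()
```

With the decreasing wrapper (variant "D"), a positive already buried behind many negatives contributes a bounded gradient, matching the intuition that such an instance may belong to another local cluster of its class; the increasing wrapper instead pushes hardest on exactly those instances.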