When positive and negative samples are imbalanced, hard negative mining strategies have been shown to help models learn subtler differences between positive and negative samples, thereby improving recognition performance. However, if an overly strict mining strategy is applied to the dataset, there is a risk of introducing false negative samples. Moreover, the mining strategy disrupts the difficulty distribution of samples in the real dataset, which may cause the model to overfit these hard samples. In this paper, we therefore investigate how to trade off the difficulty of mined samples in order to obtain and exploit high-quality negative samples, and address the problem through both the loss function and the training strategy. The proposed balance loss provides an effective criterion for the quality of negative samples by incorporating a self-supervised component into the loss function, and applies a dynamic gradient modulation strategy to achieve finer-grained gradient adjustment for samples of different difficulty. The proposed annealing training strategy then constrains the difficulty of the samples drawn by negative sample mining, providing the loss function with data sources of different difficulty distributions, and trains the model on samples of decreasing difficulty. Extensive experiments show that our new descriptors outperform previous state-of-the-art descriptors on patch verification, matching, and retrieval tasks.
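The annealing idea above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the linear annealing schedule, and the `floor_max` parameter are illustrative assumptions. For each anchor, we pick the hardest (closest) negative whose descriptor distance stays above a difficulty floor that rises over training, so later epochs draw easier negatives and very-close candidates (likely false negatives) are excluded:

```python
import numpy as np

def mine_negatives(dist, progress, floor_max=0.5):
    """Annealed hard negative mining (illustrative sketch).

    dist:     (N, M) array of anchor-to-candidate descriptor distances.
    progress: training progress in [0, 1]; the difficulty floor rises
              with progress, so mined negatives get easier over time.
    Returns the index and distance of the hardest admissible negative
    for each anchor.
    """
    floor = floor_max * progress
    # Distances below the floor are "too hard" (possible false
    # negatives) and are masked out of the selection.
    masked = np.where(dist >= floor, dist, np.inf)
    idx = np.argmin(masked, axis=1)  # hardest negative still allowed
    return idx, masked[np.arange(len(idx)), idx]
```

Early in training (`progress ≈ 0`) the floor is zero and mining is unconstrained; as training proceeds, the floor rises and the very closest candidates are progressively excluded, matching the decreasing-difficulty schedule described above.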