Recently, a series of Image-Text Matching (ITM) methods have achieved impressive performance. However, we observe that most existing ITM models suffer from gradient vanishing at the beginning of training, which makes them prone to falling into local minima. Most ITM models adopt a triplet loss with Hard Negative mining (HN) as the optimization objective. We find that optimizing an ITM model using only the hard negative samples can easily lead to gradient vanishing. In this paper, we derive the condition under which the gradient vanishes during training: when the difference between the positive-pair similarity and the negative-pair similarity is close to 0, the gradients on both the image and text encoders approach 0. To alleviate the gradient vanishing problem, we propose a Selectively Hard Negative Mining (SelHN) strategy, which decides whether to mine hard negative samples according to the gradient-vanishing condition. SelHN can be applied plug-and-play to existing ITM models to give them better training behavior. To further ensure the back-propagation of gradients, we construct a Residual Visual Semantic Embedding model with SelHN, denoted as RVSE++. Extensive experiments on two ITM benchmarks demonstrate the strength of RVSE++, which achieves state-of-the-art performance.
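The selection rule described above can be sketched as follows. This is a minimal, hypothetical pure-Python illustration, not the paper's implementation: the function name, the threshold `eps` standing in for the derived gradient-vanishing condition, and the fallback of averaging the hinge cost over all negatives are all assumptions for exposition.

```python
def selhn_triplet_loss(sim, margin=0.2, eps=0.05):
    """Sketch of a SelHN-style selective hard-negative triplet loss.

    sim: N x N similarity matrix (list of lists), sim[i][j] = s(image_i, text_j),
    with positive pairs on the diagonal. For each image anchor, the hardest
    negative is mined only when the positive/negative similarity gap exceeds
    eps (a stand-in for the gradient-vanishing condition); otherwise the hinge
    cost is averaged over all negatives so gradients keep flowing early on.
    """
    n = len(sim)
    total = 0.0
    for i in range(n):
        pos = sim[i][i]
        negs = [sim[i][j] for j in range(n) if j != i]
        hardest = max(negs)
        if pos - hardest > eps:
            # Gap is large enough: hard negative mining is safe.
            total += max(0.0, margin - pos + hardest)
        else:
            # Near the vanishing condition: fall back to all negatives.
            total += sum(max(0.0, margin - pos + s) for s in negs) / len(negs)
    return total / n
```

In a real model the same rule would be applied symmetrically to text anchors, and the similarity matrix would come from the image and text encoders.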