In this paper, we investigate how to achieve better visual grounding with modern vision-language transformers, and propose a simple yet powerful Selective Retraining (SiRi) mechanism for this challenging task. In particular, SiRi conveys a significant principle to the research of visual grounding, i.e., a better-initialized vision-language encoder helps the model converge to a better local minimum, advancing the performance accordingly. Concretely, we continually update the parameters of the encoder as training goes on, while periodically re-initializing the rest of the parameters to compel the model to be better optimized on top of the enhanced encoder. SiRi significantly outperforms previous approaches on three popular benchmarks. Specifically, our method achieves 83.04% Top-1 accuracy on RefCOCO+ testA, outperforming the state-of-the-art approaches (trained from scratch) by more than 10.21%. Additionally, we reveal that SiRi performs surprisingly well even with limited training data. We also extend it to other transformer-based visual grounding models and other vision-language tasks to verify its validity.
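To make the mechanism concrete, the sketch below illustrates a SiRi-style training loop in PyTorch, assuming the model exposes an `encoder` submodule and returns a loss from its forward pass; the names `reinit_period`, `optimizer_fn`, and the use of epoch-based re-initialization are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def selective_retraining(model, train_loader, optimizer_fn, num_epochs, reinit_period):
    """Sketch of a SiRi-style loop: the vision-language encoder keeps its
    weights across rounds, while the remaining modules are periodically
    re-initialized so the model re-optimizes on top of the enhanced encoder."""
    optimizer = optimizer_fn(model.parameters())
    for epoch in range(num_epochs):
        if epoch > 0 and epoch % reinit_period == 0:
            # Re-initialize every module except the continually trained encoder.
            for name, module in model.named_modules():
                if not name.startswith("encoder") and hasattr(module, "reset_parameters"):
                    module.reset_parameters()
            # Fresh optimizer state so the re-initialized parameters start clean.
            optimizer = optimizer_fn(model.parameters())
        for images, texts, targets in train_loader:
            optimizer.zero_grad()
            loss = model(images, texts, targets)  # assumed to return the training loss
            loss.backward()
            optimizer.step()
    return model
```

A typical call might pass `optimizer_fn=lambda params: torch.optim.AdamW(params, lr=1e-4)` and a `reinit_period` of a few epochs; the key design choice is that only the encoder carries its learned weights into each new round.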