Large-scale vision-language pre-training has shown impressive advances on a wide range of downstream tasks. Existing methods mainly model cross-modal alignment via the similarity of global image and text representations, or via advanced cross-modal attention over image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To compute these game-theoretic interactions efficiently, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on image-text retrieval benchmarks. Without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a promising new direction of learning fine-grained semantics from large-scale raw image-text pairs.
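The central quantity here is the game-theoretic (Shapley) interaction between a visual region and a textual phrase, which the paper's neural module learns to approximate because the exact value is exponential in the number of players. As a minimal sketch of what that quantity is, the snippet below gives an unbiased Monte Carlo estimator of the pairwise Shapley interaction index. The names `value_fn` and `players` are hypothetical placeholders: `value_fn(S)` is assumed to return the model's image-text score when only the regions/phrases in coalition `S` are kept and the rest are masked. This illustrates the definition the module approximates, not the paper's actual implementation.

```python
import numpy as np

def shapley_interaction(value_fn, players, i, j, num_samples=200, rng=None):
    """Monte Carlo estimate of the Shapley interaction index I(i, j).

    Exact definition: I(i, j) = sum over coalitions S (excluding i, j) of
    |S|!(n-|S|-2)!/(n-1)! * [v(S+{i,j}) - v(S+{i}) - v(S+{j}) + v(S)].
    Sampling the coalition size uniformly from {0, ..., n-2} and then a
    uniform coalition of that size matches these weights exactly, so the
    sample mean of the marginal synergies is an unbiased estimator.
    """
    rng = np.random.default_rng(rng)
    rest = [p for p in players if p not in (i, j)]  # players other than i, j
    n = len(players)
    total = 0.0
    for _ in range(num_samples):
        k = int(rng.integers(0, n - 1))  # coalition size in {0, ..., n-2}
        idx = rng.choice(len(rest), size=k, replace=False)
        S = [rest[t] for t in idx]
        # Synergy of adding i and j jointly vs. adding them separately.
        delta = (value_fn(S + [i, j]) - value_fn(S + [i])
                 - value_fn(S + [j]) + value_fn(S))
        total += delta
    return total / num_samples

# Toy check: a coalition value with an additive part plus a synergy of 5.0
# whenever players 0 and 1 co-occur; the estimate should converge to ~5.0.
players = list(range(6))
def v(S):
    return float(sum(S)) + (5.0 if 0 in S and 1 in S else 0.0)

print(shapley_interaction(v, players, 0, 1, num_samples=2000))
```

In the pre-training setting, each call to `value_fn` would be a forward pass of the vision-language model with masked inputs, which is why a learned, uncertainty-aware approximation of this estimator is needed in practice.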