Recent advances in image captioning have featured Visual-Semantic Fusion or Geometry-Aid attention refinement. However, fusion-based models are still criticized for lacking geometry information for inter- and intra-modality attention refinement. Conversely, models based on Geometry-Aid attention still suffer from the modality gap between visual and semantic information. In this paper, we introduce a novel Geometry-Entangled Visual Semantic Transformer (GEVST) network that realizes the complementary advantages of Visual-Semantic Fusion and Geometry-Aid attention refinement. Concretely, a Dense-Cap model first proposes dense captions with corresponding geometry information. Then, to empower GEVST to bridge the modality gap between visual and semantic information, we build four parallel transformer encoders, VV (Pure Visual), VS (Semantic fused to Visual), SV (Visual fused to Semantic), and SS (Pure Semantic), for final caption generation. Both visual and semantic geometry features are used in the Fusion module as well as in the Self-Attention module for better attention measurement. To validate our model, we conduct extensive experiments on the MS-COCO dataset; the results show that GEVST obtains promising performance gains.
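The abstract does not spell out the exact attention formulation, but the core idea of geometry-aided self-attention can be read as injecting a geometry-derived bias into the attention logits before they are normalized. Below is a minimal, illustrative NumPy sketch of that reading; the function and variable names are hypothetical, and the bias is taken as given rather than computed from actual bounding-box relations:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometry_aware_attention(Q, K, V, geo_bias):
    """Scaled dot-product attention with an additive geometry bias.

    geo_bias: (n, n) matrix, e.g. derived from relative box geometry,
    added to the content-based logits before the softmax.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)          # content-based attention logits
    weights = softmax(logits + geo_bias)   # geometry-entangled weights
    return weights @ V

# Toy example: 5 region features of dimension 8.
rng = np.random.default_rng(0)
n, d = 5, 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
geo = rng.standard_normal((n, n))          # stand-in geometry bias
out = geometry_aware_attention(Q, K, V, geo)
print(out.shape)  # prints (5, 8)
```

In this reading, the four encoder branches (VV, VS, SV, SS) would each apply such attention over their respective pure or fused feature sets, with the geometry bias supplying the spatial cues that pure fusion models lack.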