Generating natural sentences from images is a fundamental learning task for visual-semantic understanding in multimedia. In this paper, we propose to apply dual attention to pyramid image feature maps to fully explore visual-semantic correlations and improve the quality of the generated sentences. Specifically, by fully exploiting the contextual information provided by the hidden state of the RNN controller, the pyramid attention can better localize the visually indicative and semantically consistent regions in images. At the same time, the contextual information helps re-calibrate the importance of feature components by learning channel-wise dependencies, which improves the discriminative power of the visual features for better content description. We conducted comprehensive experiments on three well-known datasets, Flickr8K, Flickr30K, and MS COCO, and achieved impressive results in generating descriptive and fluent natural sentences from images. Using either convolutional visual features or the more informative bottom-up attention features, our composite captioning model achieves very promising performance in single-model mode. The proposed pyramid attention and dual attention methods are highly modular and can be inserted into various image captioning models to further improve performance.
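To make the dual-attention idea concrete, below is a minimal NumPy sketch, not the paper's implementation: a channel-wise gate re-calibrates feature components from the RNN hidden state, and a spatial attention then weights grid locations against the same context. All dimensions, weight matrices, and variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes (not from the paper): C channels on an H x W grid,
# d-dimensional RNN hidden state.
C, H, W, d = 8, 4, 4, 16
rng = np.random.default_rng(0)

feat = rng.standard_normal((C, H, W))   # one pyramid-level feature map
h = rng.standard_normal(d)              # RNN controller hidden state (context)

# Channel attention: learn channel-wise gates from the context and
# re-calibrate the importance of each feature channel.
W_c = rng.standard_normal((C, d)) * 0.1
gate = sigmoid(W_c @ h)                 # (C,) gates in (0, 1)
feat_c = feat * gate[:, None, None]     # re-calibrated feature map

# Spatial attention: score each grid location against the context and
# pool the re-calibrated features into one attended visual vector.
W_s = rng.standard_normal((d, C)) * 0.1
flat = feat_c.reshape(C, -1)            # (C, H*W)
scores = (W_s @ flat).T @ h             # (H*W,) location scores
alpha = softmax(scores)                 # attention weights sum to 1
context = flat @ alpha                  # (C,) attended visual feature
```

In a full captioning model this `context` vector would be fed back into the RNN at each decoding step, and the same gating/attention pair would be applied at every pyramid level; here a single level is shown for clarity.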