Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep feature used in many recent image captioning methods, the well-known bottom-up features provide a more detailed representation of the different objects in an image than feature maps extracted directly from the raw image. However, despite their expensive and resource-demanding extraction procedure, bottom-up features lack high-level semantic information about the relationships between these objects. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning that fuses the visual relationship information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network, combining language and vision similarities in a common embedding space. The results of extensive experiments on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. Moreover, the results clearly indicate that the proposed multi-modal reward leads to better model optimization, outperforming several state-of-the-art image captioning algorithms while using lightweight, easy-to-extract image features. A detailed experimental study of the components constituting the proposed method is also presented.
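To make the multi-modal reward idea concrete, the following is a minimal sketch of how a reward could combine a language similarity (generated caption vs. reference caption) with a vision similarity (generated caption vs. image) once all inputs are projected into a common embedding space. The function name, the cosine similarity choice, and the weighting parameter `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def multimodal_reward(caption_emb: np.ndarray,
                      ref_caption_emb: np.ndarray,
                      image_emb: np.ndarray,
                      alpha: float = 0.5) -> float:
    """Hypothetical multi-modal reward: a weighted combination of
    language similarity and vision similarity, assuming all three
    vectors live in the same joint embedding space."""
    # Language term: how close is the generated caption to the reference?
    lang_sim = cosine(caption_emb, ref_caption_emb)
    # Vision term: how well does the generated caption match the image?
    vis_sim = cosine(caption_emb, image_emb)
    return alpha * lang_sim + (1.0 - alpha) * vis_sim
```

In a reinforcement-learning setup such as self-critical sequence training, a scalar reward of this form would weight the policy-gradient update for each sampled caption; the balance `alpha` between the two terms would be a tunable hyperparameter.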