Mainstream image captioning models rely on Convolutional Neural Network (CNN) image features to generate captions via recurrent models. Recently, image scene graphs have been used to augment captioning models so as to leverage their structural semantics, such as object entities, relationships, and attributes. Several studies have noted that naive use of scene graphs from a black-box scene graph generator harms image captioning performance and that scene graph-based captioning models must incur the overhead of explicitly using image features to generate decent captions. Addressing these challenges, we propose \textbf{SG2Caps}, a framework that utilizes only scene graph labels for competitive image captioning performance. The basic idea is to close the semantic gap between the two scene graphs: one derived from the input image and the other from its caption. To achieve this, we leverage the spatial locations of objects and Human-Object-Interaction (HOI) labels as an additional HOI graph. SG2Caps outperforms existing scene graph-only captioning models by a large margin, indicating that scene graphs are a promising representation for image captioning. Direct utilization of scene graph labels avoids expensive graph convolutions over high-dimensional CNN features, resulting in 49% fewer trainable parameters. Our code is available at: https://github.com/Kien085/SG2Caps
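To make the label-only input concrete, the sketch below is an illustrative reconstruction rather than the authors' released code: it shows how object labels, normalized box coordinates, and relation/HOI labels could be embedded into graph node and edge features without any CNN features, assuming a PyTorch setup. All names (e.g., `SceneGraphLabelEncoder`, `num_obj_labels`) are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of encoding a scene graph
# purely from labels: object IDs plus normalized boxes for nodes, and relation
# labels (including HOI edges from a separate HOI detector) for edges.
import torch
import torch.nn as nn


class SceneGraphLabelEncoder(nn.Module):
    def __init__(self, num_obj_labels, num_rel_labels, embed_dim=128):
        super().__init__()
        self.obj_embed = nn.Embedding(num_obj_labels, embed_dim)
        self.rel_embed = nn.Embedding(num_rel_labels, embed_dim)
        # Project the 4-d normalized box (x1, y1, x2, y2) into the embedding
        # space so spatial location is available without CNN features.
        self.box_proj = nn.Linear(4, embed_dim)

    def forward(self, obj_labels, boxes, rel_labels, rel_pairs):
        # obj_labels: (N,) int64, boxes: (N, 4) floats in [0, 1]
        # rel_labels: (M,) int64, rel_pairs: (M, 2) subject/object node indices
        node_feats = self.obj_embed(obj_labels) + self.box_proj(boxes)
        # Each edge (including HOI edges) is a triplet embedding:
        # subject node + relation label + object node.
        edge_feats = (node_feats[rel_pairs[:, 0]]
                      + self.rel_embed(rel_labels)
                      + node_feats[rel_pairs[:, 1]])
        return node_feats, edge_feats


# Toy usage: three objects and two edges, one of them an HOI edge
# such as "person holding racket" coming from an HOI detector.
enc = SceneGraphLabelEncoder(num_obj_labels=150, num_rel_labels=50)
obj_labels = torch.tensor([1, 7, 23])
boxes = torch.tensor([[0.1, 0.2, 0.5, 0.9],
                      [0.4, 0.5, 0.6, 0.7],
                      [0.7, 0.1, 0.8, 0.2]])
rel_labels = torch.tensor([3, 12])          # e.g. "holding", "near"
rel_pairs = torch.tensor([[0, 1], [1, 2]])  # (subject, object) index pairs
nodes, edges = enc(obj_labels, boxes, rel_labels, rel_pairs)
```

Because the encoder consumes only label indices and box coordinates, the per-node input is a few hundred dimensions rather than a high-dimensional CNN feature map, which is consistent with the reported reduction in trainable parameters.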