We address the challenging problem of image captioning by revisiting the representation of image scene graphs. At the core of our method lies the decomposition of a scene graph into a set of sub-graphs, with each sub-graph capturing a semantic component of the input image. We design a deep model to select important sub-graphs and to decode each selected sub-graph into a single target sentence. By using sub-graphs, our model is able to attend to different components of the image. Our method thus supports accurate, diverse, grounded, and controllable captioning at the same time. We present extensive experiments to demonstrate the benefits of our comprehensive captioning model. Our method establishes new state-of-the-art results in caption diversity, grounding, and controllability, and compares favourably to the latest methods in caption quality. Our project website can be found at http://pages.cs.wisc.edu/~yiwuzhong/Sub-GC.html.