Current image captioning approaches generate descriptions that lack specific information, such as the named entities involved in the images. In this paper, we propose a new task: generating informative image captions given images and their hashtags as input. We propose a simple but effective approach to this problem. We first train a convolutional neural network with long short-term memory (CNN-LSTM) model to generate a template caption from the input image. We then use a knowledge-graph-based collective inference algorithm to fill the template with specific named entities retrieved via the hashtags. Experiments on a new benchmark dataset collected from Flickr show that our model generates news-style image descriptions with much richer information, significantly outperforming unimodal baselines on various evaluation metrics.
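The second stage of the pipeline above (filling a template caption with entities retrieved via hashtags) can be sketched as follows. This is a minimal illustrative mock, not the paper's method: entity retrieval and typing are stubbed with a toy lookup table standing in for the knowledge graph, and the collective inference step is omitted. All names (`retrieve_entities`, `fill_template`, `KB`) are hypothetical.

```python
# Hypothetical sketch: fill a template caption containing typed
# placeholder slots (<PERSON>, <LOCATION>, ...) with named entities
# retrieved via hashtags. A toy dictionary stands in for the
# knowledge graph; the paper's collective inference is not modeled.
from typing import Dict, List


def retrieve_entities(hashtags: List[str],
                      kb: Dict[str, Dict[str, str]]) -> Dict[str, str]:
    """Look up each hashtag in the toy knowledge base and keep the
    first entity found for each slot type."""
    slots: Dict[str, str] = {}
    for tag in hashtags:
        entry = kb.get(tag.lstrip("#").lower())
        if entry and entry["type"] not in slots:
            slots[entry["type"]] = entry["name"]
    return slots


def fill_template(template: str, slots: Dict[str, str]) -> str:
    """Replace each <TYPE> placeholder with its retrieved entity;
    unresolved placeholders are left intact."""
    for slot_type, name in slots.items():
        template = template.replace(f"<{slot_type}>", name)
    return template


# Toy knowledge base: hashtag -> (entity type, canonical name).
KB = {
    "obama": {"type": "PERSON", "name": "Barack Obama"},
    "paris": {"type": "LOCATION", "name": "Paris"},
}

caption = fill_template("<PERSON> gives a speech in <LOCATION>.",
                        retrieve_entities(["#Obama", "#Paris"], KB))
```

The key design point mirrored here is the separation of concerns: the CNN-LSTM only has to learn generic, type-level caption structure, while entity-specific information is injected afterwards from an external source.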