News Image Captioning requires describing an image by leveraging additional context from an accompanying news article. Previous works leverage the article only coarsely to extract the necessary context, which makes it challenging for models to identify relevant events and named entities. In our paper, we first demonstrate that by combining fine-grained context that captures the key named entities (obtained via an oracle) with global context that summarizes the news, we can dramatically improve the model's ability to generate accurate news captions. This raises the question: how can such key entities be extracted automatically from an image? We propose to use the pre-trained vision-and-language retrieval model CLIP to localize the visually grounded entities in the news article, and then to capture the non-visual entities via an open relation extraction model. Our experiments demonstrate that simply selecting better context from the article significantly improves the performance of existing models and achieves new state-of-the-art results on multiple benchmarks.
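To make the CLIP-based entity localization step concrete, the following is a minimal sketch, not the authors' implementation: it assumes candidate named entities have already been extracted from the article (e.g., by an off-the-shelf NER pass) and uses the Hugging Face CLIP API to rank them by image-text similarity, keeping the top-ranked entities as fine-grained context. The model checkpoint, function name, and usage below are illustrative assumptions.

```python
# Hypothetical sketch of CLIP-based entity selection: score candidate
# named entities from the article against the news image and keep the
# best-matching ones as visually grounded context.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_entities_by_image(image_path, entities, top_k=5):
    """Return the top_k entities whose text best matches the image under CLIP."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=entities, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: image-to-text similarity, shape (1, num_entities)
    scores = outputs.logits_per_image.squeeze(0)
    order = scores.argsort(descending=True)[:top_k]
    return [(entities[i], scores[i].item()) for i in order]

# Hypothetical usage; the entity list would come from an NER pass over
# the article, and the image path is a placeholder.
# rank_entities_by_image("news_photo.jpg",
#                        ["Angela Merkel", "Berlin", "G20 summit"])
```

Entities that score poorly under this image-text matching would then fall to the complementary open relation extraction step described above, which captures the non-visual entities from the article text.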