Current deep learning models often achieve excellent results on benchmark image-to-text datasets but fail to generate texts that are useful in practice. We argue that to close this gap, it is vital to distinguish descriptions from captions based on their distinct communicative roles. Descriptions focus on visual features and are meant to replace an image (often to increase accessibility), whereas captions appear alongside an image to supply additional information. To motivate this distinction and help people put it into practice, we introduce the publicly available Wikipedia-based dataset Concadia consisting of 96,918 images with corresponding English-language descriptions, captions, and surrounding context. Using insights from Concadia, models trained on it, and a preregistered human-subjects experiment with human- and model-generated texts, we characterize the commonalities and differences between descriptions and captions. In addition, we show that, for generating both descriptions and captions, it is useful to augment image-to-text models with representations of the textual context in which the image appeared.