Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 in short), a geographically diverse set of 3600 images annotated with human-generated reference captions in 36 languages. The images were selected from across the world, covering regions where the 36 languages are spoken, and annotated with captions that achieve consistency in terms of style across all languages, while avoiding annotation artifacts due to direct translation. We apply this benchmark to model selection for massively multilingual image captioning models, and show superior correlation results with human evaluations when using XM3600 as golden references for automatic metrics.
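To make the evaluation protocol concrete, the following is a minimal sketch (not the paper's actual pipeline) of scoring model captions against XM3600-style multi-reference captions with an automatic metric and then rank-correlating those scores with human ratings. The metric here (best unigram F1 over references) is a deliberately simple stand-in for metrics such as CIDEr, and all captions, references, and human scores are hypothetical illustrations.

```python
# Illustrative sketch only: the paper's metrics and human-evaluation
# protocol differ; unigram F1 is a stand-in automatic metric.
from scipy.stats import kendalltau


def unigram_f1(candidate: str, references: list[str]) -> float:
    """Best unigram F1 of a candidate caption against a set of references."""
    cand = candidate.lower().split()
    best = 0.0
    for ref in references:
        ref_toks = ref.lower().split()
        overlap = len(set(cand) & set(ref_toks))
        if overlap == 0:
            continue
        p, r = overlap / len(cand), overlap / len(ref_toks)
        best = max(best, 2 * p * r / (p + r))
    return best


# Hypothetical inputs: model captions, XM3600-style reference captions,
# and per-caption human ratings for the same images (here, in French).
model_captions = ["un chien court sur la plage", "une ville la nuit"]
xm3600_refs = [
    ["un chien qui court sur la plage", "chien sur le sable"],
    ["vue nocturne d'une ville illuminée"],
]
human_scores = [0.9, 0.6]

metric_scores = [
    unigram_f1(cap, refs) for cap, refs in zip(model_captions, xm3600_refs)
]

# Rank correlation between the automatic metric and human judgments;
# higher tau means the references support more human-like model ranking.
tau, _ = kendalltau(metric_scores, human_scores)
print(f"Kendall tau = {tau:.3f}")
```

In this setting, "superior correlation" means that metric scores computed against XM3600 references order systems more consistently with human judgments than scores computed against alternative (e.g., translated) references would.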