Connecting Vision and Language plays an essential role in Generative Intelligence. For this reason, in the last few years, a large research effort has been devoted to image captioning, i.e., the task of describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoding step and a language model for text generation. During these years, both components have evolved considerably through the exploitation of object regions, attributes, and relationships and the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results obtained, research in image captioning has not reached a conclusive answer yet. This work aims at providing a comprehensive overview and categorization of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in image captioning architectures and training strategies. Moreover, many variants of the problem and its open challenges are analyzed and discussed. The final goal of this work is to serve as a tool for understanding the existing state-of-the-art and highlighting the future directions for an area of research where Computer Vision and Natural Language Processing can find an optimal synergy.