Image Captioning is an active research task whose goal is to describe the content of an image in terms of the objects and their relationships in the scene. To tackle this task, two important research areas converge: computer vision and natural language processing. In Image Captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or poorly) a method performs. In recent years, it has been observed that classical metrics based on $n$-grams are insufficient to capture the semantics and the critical meaning needed to describe the content of an image. To measure how well the current and more recent metrics are doing, in this article we present an evaluation of several kinds of Image Captioning metrics and a comparison between them using the well-known MS COCO dataset. The metrics were selected from among the most used in prior works: those based on $n$-grams, such as BLEU, SacreBLEU, METEOR, ROUGE-L, CIDEr, and SPICE, and those based on embeddings, such as BERTScore and CLIPScore. For this, we designed two scenarios: (1) a set of artificially built captions of varying quality, and (2) a comparison of some state-of-the-art Image Captioning methods. We report interesting findings while trying to answer the following questions: Are the current metrics helping to produce high-quality captions? How do the current metrics compare to each other? What are the metrics really measuring?
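To make the limitation of $n$-gram metrics concrete, the following is a minimal sketch (not any specific library's implementation) of clipped $n$-gram precision, the core quantity behind BLEU. It shows how a caption with inverted meaning can still receive a perfect unigram score, since $n$-gram overlap ignores semantics:

```python
from collections import Counter

def ngram_precision(reference, candidate, n=1):
    """Clipped n-gram precision (simplified sketch of the core of BLEU)."""
    ref_ngrams = Counter(tuple(reference[i:i + n])
                         for i in range(len(reference) - n + 1))
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    # Each candidate n-gram counts only up to its frequency in the reference.
    matches = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    return matches / max(sum(cand_ngrams.values()), 1)

ref = "a man rides a horse".split()
bad = "a horse rides a man".split()  # same words, opposite meaning

print(ngram_precision(ref, bad, n=1))  # 1.0 — perfect unigram precision
print(ngram_precision(ref, bad, n=2))  # 0.75 — still high despite inverted semantics
```

Embedding-based metrics such as BERTScore and CLIPScore were proposed precisely to penalize this kind of semantically wrong but lexically overlapping caption.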