Evaluating video captioning systems is a challenging task, as there are multiple factors to consider; for instance: the fluency of the caption, multiple actions happening in a single scene, and the human bias towards what is considered important. Most metrics try to measure how similar the system-generated captions are to a single human-annotated caption or a set of them. This paper presents a new method based on a deep learning model to evaluate these systems. The model is based on BERT, a language model that has been shown to work well in multiple NLP tasks. The aim is for the model to learn to perform an evaluation similar to that of a human. To do so, we use a dataset that contains human evaluations of system-generated captions. The dataset consists of the human judgments of the captions produced by the systems participating in various years of the TRECVid video-to-text task. These annotations will be made publicly available. BERTHA obtains favourable results, outperforming the commonly used metrics in some setups.
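To illustrate the general idea of learning a human-like evaluator, the minimal sketch below fine-tunes BERT with a single-output regression head to predict a human judgment score for a (reference caption, candidate caption) pair. This is an assumption-laden illustration, not the paper's released code: the model name, score scale, loss, and example data are all hypothetical.

```python
# Minimal sketch (NOT the authors' implementation): fine-tune BERT with a
# regression head so it predicts a human judgment score for a caption pair.
# The checkpoint, 0-1 score scale, MSE loss, and example data are assumptions.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

class CaptionScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Single-output regression head on the pooled [CLS] representation.
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output).squeeze(-1)

model = CaptionScorer()
loss_fn = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One hypothetical training example: a human-annotated reference, a
# system-generated candidate, and a human quality score on a 0-1 scale.
reference = "a man is playing a guitar on stage"
candidate = "a person plays guitar"
human_score = torch.tensor([0.8])

# BERT's sentence-pair encoding: [CLS] reference [SEP] candidate [SEP].
batch = tokenizer(reference, candidate, return_tensors="pt",
                  padding=True, truncation=True)
pred = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(pred, human_score)
loss.backward()
optimizer.step()
```

At inference time, the predicted score would be used directly as the metric value for a candidate caption, in place of overlap-based measures.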