Video captioning is the task of automatically describing the content of a video in natural language. Many methods have been proposed for this task, and the large MSR Video to Text (MSR-VTT) dataset is often used as the benchmark for evaluating their performance. However, we found that the human annotations in this dataset, i.e., the descriptions of video content, are quite noisy: many captions are duplicates, and many contain grammatical errors. These problems may hinder the learning of video captioning models. We cleaned the MSR-VTT annotations by removing these problems and then tested several typical video captioning models on the cleaned dataset. Experimental results showed that data cleaning improved the performance of the models as measured by popular quantitative metrics. We also recruited human subjects to evaluate the outputs of a model trained on the original and cleaned datasets. This human behavior experiment demonstrated that, when trained on the cleaned dataset, the model generated captions that were more coherent and more relevant to the content of the video clips. The cleaned dataset is publicly available.
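As a minimal illustration of the duplicate-removal step mentioned above, the sketch below deduplicates captions per video clip after simple whitespace and case normalization. The function name and the annotation layout (a mapping from video id to a list of caption strings) are illustrative assumptions; the paper's full cleaning pipeline, including the grammar fixes, is not detailed in this abstract.

```python
from collections import OrderedDict

def dedupe_captions(annotations):
    """Remove exact-duplicate captions per video clip.

    `annotations` maps a video id to a list of caption strings,
    mirroring the per-clip structure of MSR-VTT annotations
    (illustrative layout, not the dataset's actual file format).
    Returns a new mapping with duplicates removed after
    whitespace/case normalization, preserving first occurrences.
    """
    cleaned = {}
    for video_id, captions in annotations.items():
        seen = OrderedDict()
        for caption in captions:
            # Collapse runs of whitespace and lowercase before comparing,
            # so trivially different copies of the same sentence match.
            key = " ".join(caption.lower().split())
            if key not in seen:
                seen[key] = caption
        cleaned[video_id] = list(seen.values())
    return cleaned

if __name__ == "__main__":
    # Toy annotation set: the second caption duplicates the first
    # up to spacing and capitalization.
    toy = {"video0": ["a man is singing", "A man is  singing", "a dog runs"]}
    print(dedupe_captions(toy))  # {'video0': ['a man is singing', 'a dog runs']}
```

Exact-match deduplication like this only addresses the duplicate-caption problem; filtering captions with grammatical errors would require an additional check (e.g., a grammar-checking tool or manual review), which is beyond the scope of this sketch.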