Video captioning aims to describe the content of videos in natural language. Although significant progress has been made, there is still considerable room to improve performance in real-world applications, mainly because of the long-tail words challenge. In this paper, we propose a text with knowledge graph augmented transformer (TextKG) for video captioning. Notably, TextKG is a two-stream transformer formed by an external stream and an internal stream. The external stream is designed to absorb additional knowledge: it models the interactions between that knowledge, e.g., a pre-built knowledge graph, and the built-in information of videos, e.g., salient object regions, speech transcripts, and video captions, to mitigate the long-tail words challenge. Meanwhile, the internal stream is designed to exploit the multi-modality information in videos (e.g., the appearance of video frames, speech transcripts, and video captions) to ensure the quality of the generated captions. In addition, a cross-attention mechanism is used between the two streams to share information, so that the two streams can help each other produce more accurate results. Extensive experiments on four challenging video captioning datasets, i.e., YouCookII, ActivityNet Captions, MSRVTT, and MSVD, demonstrate that the proposed method performs favorably against state-of-the-art methods. Specifically, the proposed TextKG method outperforms the best published results by 18.7% absolute CIDEr score on the YouCookII dataset.
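For intuition, below is a minimal PyTorch sketch of the two-stream idea described above: each stream applies self-attention over its own tokens (the internal stream over in-video tokens; the external stream over knowledge-graph entities together with video tokens), and cross attention in both directions lets the streams share information. All names and shapes here (TwoStreamBlock, d_model, the example token counts) are illustrative assumptions for exposition, not the paper's exact TextKG implementation.

```python
# Minimal sketch of a two-stream transformer block with cross attention for
# information sharing between an internal stream and an external stream.
# Module and argument names are illustrative, not the paper's exact design.
import torch
import torch.nn as nn


class TwoStreamBlock(nn.Module):
    def __init__(self, d_model: int = 512, num_heads: int = 8):
        super().__init__()
        # Internal stream: self-attention over in-video tokens
        # (frame appearance, speech transcripts, caption tokens).
        self.internal_self = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # External stream: self-attention over knowledge-graph entities
        # together with the in-video tokens.
        self.external_self = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Cross attention in both directions so the two streams share information.
        self.int_from_ext = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ext_from_int = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm_int = nn.LayerNorm(d_model)
        self.norm_ext = nn.LayerNorm(d_model)

    def forward(self, internal_tokens: torch.Tensor, external_tokens: torch.Tensor):
        # Self-attention within each stream.
        x_int, _ = self.internal_self(internal_tokens, internal_tokens, internal_tokens)
        x_ext, _ = self.external_self(external_tokens, external_tokens, external_tokens)
        # Cross attention: each stream queries the other stream's tokens.
        y_int, _ = self.int_from_ext(x_int, x_ext, x_ext)
        y_ext, _ = self.ext_from_int(x_ext, x_int, x_int)
        return self.norm_int(x_int + y_int), self.norm_ext(x_ext + y_ext)


if __name__ == "__main__":
    block = TwoStreamBlock()
    video_and_text = torch.randn(2, 60, 512)  # frame + transcript + caption tokens
    kg_and_video = torch.randn(2, 80, 512)    # knowledge-graph entities + video tokens
    out_int, out_ext = block(video_and_text, kg_and_video)
    # Prints: torch.Size([2, 60, 512]) torch.Size([2, 80, 512])
    print(out_int.shape, out_ext.shape)
```

In this sketch the bidirectional cross attention is what lets rare, knowledge-graph-grounded entity tokens influence the caption stream, which is the mechanism the abstract credits for mitigating the long-tail words problem.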