Video captioning aims to interpret complex visual content as text descriptions, which requires the model to fully understand video scenes, including objects and their interactions. Prevailing methods adopt off-the-shelf object detection networks to generate object proposals and use attention mechanisms to model the relations between objects. However, they often miss semantic concepts that lie outside the pretrained detector's vocabulary and fail to identify the exact predicate relationships between objects. In this paper, we investigate the open research task of generating text descriptions for given videos, and propose Cross-Modal Graph (CMG) with meta concepts for video captioning. Specifically, to cover the useful semantic concepts in video captions, we weakly learn the visual regions corresponding to text descriptions, where the associated visual regions and textual words are named cross-modal meta concepts. We further build meta concept graphs dynamically from the learned cross-modal meta concepts. We also construct holistic video-level and local frame-level video graphs with the predicted predicates to model video sequence structures. We validate the efficacy of the proposed techniques with extensive experiments and achieve state-of-the-art results on two public datasets.
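To make the idea of dynamically built meta concept graphs concrete, the following is a minimal illustrative sketch, not the paper's implementation: each node pairs a weakly aligned visual region feature with its word embedding, and edges are formed per video from pairwise node similarity. All names here (`build_meta_concept_graph`, `region_feats`, `word_embs`, `top_k`) are hypothetical placeholders.

```python
# Illustrative sketch (assumed, not from the paper): build a meta concept graph
# whose nodes fuse region features with word embeddings, with a dynamic
# top-k similarity adjacency per video.
import numpy as np

def build_meta_concept_graph(region_feats, word_embs, top_k=3):
    """Fuse region and word features into node features, then connect each
    node to its top-k most similar neighbors by cosine similarity."""
    # Node features: concatenate each visual region feature with its word embedding.
    nodes = np.concatenate([region_feats, word_embs], axis=1)   # (N, d_v + d_w)
    # Cosine similarity between all node pairs.
    normed = nodes / (np.linalg.norm(nodes, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T                                     # (N, N)
    np.fill_diagonal(sim, -np.inf)                              # exclude self-loops
    # Dynamic adjacency: keep only the top-k strongest edges per node.
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        neighbors = np.argsort(sim[i])[-top_k:]
        adj[i, neighbors] = sim[i, neighbors]
    return nodes, adj

# Toy usage: 5 meta concepts with 2048-d region features and 300-d word embeddings.
rng = np.random.default_rng(0)
nodes, adj = build_meta_concept_graph(rng.normal(size=(5, 2048)),
                                      rng.normal(size=(5, 300)), top_k=2)
print(nodes.shape, adj.shape)   # (5, 2348) (5, 5)
```

The graph output would then feed a graph reasoning module; how node features are fused and how edges are weighted in the actual CMG model is specified in the paper, not by this sketch.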