Generating grammatically and semantically correct captions for video captioning is a challenging task. Captions generated by existing methods are either produced word by word, without regard for grammatical structure, or miss key information from the input videos. To address these issues, we introduce a novel global-local fusion network with a Global-Local Fusion Block (GLFB) that encodes and fuses features from different part-of-speech (POS) components with visual-spatial features. We use novel combinations of POS components - 'determiner + subject', 'auxiliary verb', 'verb', and 'determiner + object' - to supervise the corresponding POS blocks: Det + Subject, Aux Verb, Verb, and Det + Object. The global-local fusion network, together with the POS blocks, helps align the visual features with the language description to generate grammatically and semantically correct captions. Extensive qualitative and quantitative experiments on the benchmark MSVD and MSRVTT datasets demonstrate that the proposed approach generates more grammatically and semantically correct captions than existing methods, achieving a new state of the art. Ablation studies on the POS blocks and the GLFB demonstrate the impact of each contribution on the proposed method.
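To make the fusion idea concrete, the following is a minimal sketch of how four POS blocks and a GLFB could be wired together. It assumes pooled visual-spatial features, per-POS encoders, and a concatenation-plus-projection fusion; the fusion strategy, module names, and dimensions are all illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch: four POS blocks (Det + Subject, Aux Verb, Verb,
# Det + Object) each encode the shared visual features, and a
# Global-Local Fusion Block (GLFB) merges their outputs with the global
# visual representation. Fusion by concatenation + linear projection is
# an assumption for illustration only.
import torch
import torch.nn as nn

class POSBlock(nn.Module):
    """Encodes visual features into one POS component (e.g., Verb)."""
    def __init__(self, vis_dim: int, pos_dim: int):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(vis_dim, pos_dim), nn.ReLU())

    def forward(self, vis: torch.Tensor) -> torch.Tensor:
        # vis: (batch, vis_dim) pooled visual-spatial features
        return self.encode(vis)

class GLFB(nn.Module):
    """Fuses global visual features with four local POS component features."""
    def __init__(self, vis_dim: int, pos_dim: int, out_dim: int):
        super().__init__()
        # One block per POS component: Det+Subj, Aux Verb, Verb, Det+Obj
        self.pos_blocks = nn.ModuleList(
            [POSBlock(vis_dim, pos_dim) for _ in range(4)]
        )
        self.fuse = nn.Linear(vis_dim + 4 * pos_dim, out_dim)

    def forward(self, vis: torch.Tensor):
        local = [blk(vis) for blk in self.pos_blocks]         # per-POS features
        fused = self.fuse(torch.cat([vis, *local], dim=-1))   # global-local fusion
        return fused, local   # `local` can be supervised with POS-component targets

# Usage: the fused features would feed a caption decoder, while each entry
# of `local` would receive an auxiliary loss against the matching POS
# component of the ground-truth caption.
vis = torch.randn(2, 512)                   # batch of pooled video features
fused, local = GLFB(512, 128, 512)(vis)
print(fused.shape, [l.shape for l in local])
```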