Video captioning is a challenging task, as it requires accurately transforming visual understanding into natural language description. To date, state-of-the-art methods inadequately model the global-local representation across video frames for caption generation, leaving plenty of room for improvement. In this work, we approach video captioning from a new perspective and propose GL-RG, a \textbf{G}lobal-\textbf{L}ocal \textbf{R}epresentation \textbf{G}ranularity framework for video captioning. Our GL-RG demonstrates three advantages over prior efforts: 1) we explicitly exploit extensive visual representations from different video ranges to improve linguistic expression; 2) we devise a novel global-local encoder that produces a rich semantic vocabulary, yielding a descriptive granularity of video contents across frames; 3) we develop an incremental training strategy that organizes model learning in an incremental fashion to achieve optimal captioning behavior. Experimental results on the challenging MSR-VTT and MSVD datasets show that our GL-RG outperforms recent state-of-the-art methods by a significant margin. Code is available at \url{https://github.com/ylqi/GL-RG}.
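To make the global-local idea concrete, the following is a minimal sketch (in PyTorch) of one way to pool per-frame features at both global and local temporal ranges; the function name, segmentation scheme, and feature dimensions are illustrative assumptions, not the paper's actual encoder.

\begin{verbatim}
# Illustrative sketch only (not the authors' implementation): pooling
# precomputed per-frame features at global and local temporal ranges.
import torch

def global_local_encode(frame_feats: torch.Tensor,
                        num_segments: int = 4) -> torch.Tensor:
    """frame_feats: (T, D) per-frame features.
    Returns (num_segments + 1, D): one global plus per-segment descriptors."""
    # Global descriptor: mean over the full temporal range.
    global_feat = frame_feats.mean(dim=0, keepdim=True)
    # Local descriptors: split frames into contiguous segments
    # and pool each segment separately.
    local_feats = [chunk.mean(dim=0, keepdim=True)
                   for chunk in frame_feats.chunk(num_segments, dim=0)]
    return torch.cat([global_feat] + local_feats, dim=0)

# Example: 32 frames of 2048-d CNN features -> 5 pooled descriptors.
feats = torch.randn(32, 2048)
print(global_local_encode(feats).shape)  # torch.Size([5, 2048])
\end{verbatim}

A caption decoder could then attend over these descriptors, so that both long-range (global) context and short-range (local) details inform word generation; the paper's actual encoder and training strategy are described in the main text.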