A major challenge for video captioning is combining audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse multi-modal representations at different levels of detail remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset.
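To make the fusion idea concrete, the following is a minimal, illustrative sketch of cross-modal attention with gated fusion, not the authors' exact HACA architecture (which additionally aligns global and local temporal dynamics hierarchically). The module, layer names, and feature dimensions here are assumptions chosen for the example: a decoder state attends separately over visual and audio feature sequences, and a learned scalar gate selectively mixes the two attended contexts.

```python
# Illustrative sketch only: generic cross-modal attention with a learned
# fusion gate. All dimensions and names are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalAttentionFusion(nn.Module):
    def __init__(self, query_dim, visual_dim, audio_dim, attn_dim):
        super().__init__()
        # Project the decoder query and each modality into a shared space.
        self.q_proj = nn.Linear(query_dim, attn_dim)
        self.v_proj = nn.Linear(visual_dim, attn_dim)
        self.a_proj = nn.Linear(audio_dim, attn_dim)
        # Gate deciding how much to rely on each modality's context.
        self.gate = nn.Linear(attn_dim * 2, 1)

    def attend(self, query, keys):
        # query: (batch, attn_dim); keys: (batch, seq_len, attn_dim)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)      # (batch, seq_len)
        weights = F.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (batch, attn_dim)
        return context

    def forward(self, decoder_state, visual_feats, audio_feats):
        # decoder_state: (batch, query_dim)
        # visual_feats: (batch, T_v, visual_dim); audio_feats: (batch, T_a, audio_dim)
        q = self.q_proj(decoder_state)
        v_ctx = self.attend(q, self.v_proj(visual_feats))
        a_ctx = self.attend(q, self.a_proj(audio_feats))
        # Scalar gate selectively fuses the visual and audio contexts.
        g = torch.sigmoid(self.gate(torch.cat([v_ctx, a_ctx], dim=1)))
        return g * v_ctx + (1 - g) * a_ctx


if __name__ == "__main__":
    fusion = CrossModalAttentionFusion(query_dim=512, visual_dim=2048,
                                       audio_dim=128, attn_dim=256)
    state = torch.randn(4, 512)
    visual = torch.randn(4, 30, 2048)   # e.g. 30 sampled video frames
    audio = torch.randn(4, 20, 128)     # e.g. 20 audio segments
    print(fusion(state, visual, audio).shape)  # torch.Size([4, 256])
```

In this sketch the gate plays the role of "selective fusion": when the audio stream is uninformative the gate can push the mixture toward the visual context, and vice versa.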