To generate proper captions for videos, a model needs to identify the relevant concepts and attend to both the spatial relationships between them and the temporal development of the clip. Our end-to-end encoder-decoder video captioning framework incorporates two transformer-based architectures: an adapted transformer for joint spatio-temporal video analysis and a self-attention-based decoder for advanced text generation. Furthermore, we introduce an adaptive frame selection scheme that reduces the number of required input frames while preserving the relevant content when training both transformers. Additionally, we estimate semantic concepts relevant for video captioning by aggregating all ground-truth captions of each sample. Our approach achieves state-of-the-art results on the MSVD benchmark as well as on the large-scale MSR-VTT and VATEX benchmark datasets across multiple Natural Language Generation (NLG) metrics. Additional evaluations on diversity scores highlight the expressiveness and structural diversity of our generated captions.
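To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation, of an encoder-decoder video captioner with a joint spatio-temporal transformer encoder, a self-attention decoder, and a simple adaptive frame selection step. It assumes PyTorch; the function `select_frames`, the class `VideoCaptioner`, the change-based selection criterion, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch of the abstract's architecture (assumed PyTorch implementation).
import torch
import torch.nn as nn


def select_frames(frames: torch.Tensor, k: int) -> torch.Tensor:
    """Hypothetical adaptive frame selection: keep the k frames whose content
    changes most relative to the previous frame. frames: [T, C, H, W]."""
    diffs = (frames[1:] - frames[:-1]).abs().mean(dim=(1, 2, 3))
    scores = torch.cat([diffs.new_ones(1), diffs])          # always allow frame 0
    keep = torch.topk(scores, k).indices.sort().values      # chronological order
    return frames[keep]


class VideoCaptioner(nn.Module):
    """Joint spatio-temporal transformer encoder + self-attention text decoder."""

    def __init__(self, vocab_size=10000, d_model=512, n_heads=8,
                 enc_layers=4, dec_layers=4, patch_dim=768, max_len=30):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d_model)      # video tokens -> model dim
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            enc_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            dec_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, video_tokens, caption_ids):
        # video_tokens: [B, T*P, patch_dim] -- all spatio-temporal tokens attended jointly
        memory = self.encoder(self.patch_proj(video_tokens))
        pos = torch.arange(caption_ids.size(1), device=caption_ids.device)
        tgt = self.token_emb(caption_ids) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(caption_ids.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal.to(caption_ids.device))
        return self.lm_head(out)                             # next-token logits


# Example usage with random data (all dimensions are assumptions):
frames = torch.randn(32, 3, 224, 224)             # 32 raw frames
kept = select_frames(frames, k=8)                 # adaptive subset of 8 frames
video_tokens = torch.randn(1, 8 * 49, 768)        # e.g. 8 frames x 49 patch features each
captions = torch.randint(0, 10000, (1, 12))       # tokenized ground-truth caption
logits = VideoCaptioner()(video_tokens, captions)
print(logits.shape)                               # torch.Size([1, 12, 10000])
```

The sketch only illustrates the overall flow (frame selection, joint encoding of all spatio-temporal tokens, autoregressive caption decoding); the paper's actual frame-selection criterion, concept estimation from aggregated ground-truth captions, and training details are not reproduced here.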