Video captioning is a challenging task that necessitates a thorough comprehension of visual scenes. Existing methods typically follow a one-to-one mapping, which concentrates on a limited sample space and ignores the intrinsic semantic associations between samples, resulting in rigid and uninformative expressions. To address this issue, we propose a novel and flexible framework, namely the Support-set based Multi-modal Representation Enhancement (SMRE) model, to mine rich information in a semantic subspace shared between samples. Specifically, we propose a Support-set Construction (SC) module that builds a support set to learn the underlying connections between samples and obtain semantic-related visual elements. During this process, we design a Semantic Space Transformation (SST) module to constrain relative distances and govern multi-modal interactions in a self-supervised way. Extensive experiments on the MSVD and MSR-VTT datasets demonstrate that our SMRE achieves state-of-the-art performance.
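To make the two components concrete, the following is a minimal sketch, not the authors' released code, of the idea described above: a support set of semantically related samples is attended over to enhance each video's representation (the SC step), and a self-supervised relative-distance constraint aligns visual and textual features in a shared semantic subspace (the SST step). The module name `SupportSetEnhancer`, the loss `semantic_space_loss`, the feature dimensions, and the InfoNCE-style formulation are all assumptions for illustration only.

```python
# Hedged sketch of the support-set enhancement idea; module names, dimensions,
# and the contrastive-style objective are illustrative assumptions, not the
# paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SupportSetEnhancer(nn.Module):
    """Hypothetical Support-set Construction (SC) step: enhance each sample's
    visual feature with an attention-weighted sum over a support set built
    from the other samples in the batch."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, D) pooled video features for a batch of B videos
        q = self.query(visual_feats)
        k = self.key(visual_feats)
        v = self.value(visual_feats)
        scores = q @ k.t() / q.size(-1) ** 0.5            # (B, B) cross-sample affinities
        # Exclude each sample itself so the support set contains only other samples.
        mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        scores = scores.masked_fill(mask, float("-inf"))
        weights = F.softmax(scores, dim=-1)               # semantic relatedness weights
        support = weights @ v                             # semantic-related visual elements
        return visual_feats + support                     # residual enhancement


def semantic_space_loss(visual: torch.Tensor, textual: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Hypothetical Semantic Space Transformation (SST) objective: a
    self-supervised InfoNCE-style loss that pulls paired visual/textual
    features together and pushes unpaired ones apart, constraining their
    relative distances in the shared semantic subspace."""
    visual = F.normalize(visual, dim=-1)
    textual = F.normalize(textual, dim=-1)
    logits = visual @ textual.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(visual.size(0), device=visual.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    feats = torch.randn(8, 512)      # pooled visual features for 8 videos
    caps = torch.randn(8, 512)       # corresponding caption embeddings
    enhanced = SupportSetEnhancer(512)(feats)
    print(enhanced.shape, semantic_space_loss(enhanced, caps).item())
```

The enhanced features would then feed a captioning decoder, with the distance loss added to the usual cross-entropy captioning objective; the exact weighting and decoder architecture are left unspecified here.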