A key challenge in video question answering is how to realize cross-modal semantic alignment between textual concepts and the corresponding visual objects. Existing methods mostly seek to align word representations with video regions. However, word representations often cannot convey a complete description of textual concepts, which are generally expressed by compositions of words. To address this issue, we propose to first build a syntactic dependency tree for each question with an off-the-shelf tool and use it to guide the extraction of meaningful word compositions. Based on the extracted compositions, a hypergraph is then built by viewing the words as nodes and the compositions as hyperedges. Hypergraph convolutional networks (HCN) are employed to learn the initial representations of word compositions. Afterwards, an optimal-transport-based method is proposed to perform cross-modal semantic alignment between the textual and visual semantic spaces. To reflect cross-modal influences, the cross-modal information is incorporated into the initial representations, yielding a model named cross-modality-aware syntactic HCN. Experimental results on three benchmarks show that our method outperforms all strong baselines. Further analyses demonstrate the effectiveness of each component and show that our model is good at modeling different levels of semantic compositions and filtering out irrelevant information.
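To make the pipeline concrete, the following is a minimal sketch of the dependency-guided hypergraph construction and one hypergraph convolution layer. The composition rule used here (each head word together with its direct dependents forms one hyperedge) is an illustrative assumption rather than the paper's exact extraction rule, spaCy stands in for the unspecified off-the-shelf parser, and the layer follows the standard HGNN formulation.

```python
# Sketch only: composition rule, parser choice, and dimensions are assumptions.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")

def build_incidence_matrix(question: str) -> np.ndarray:
    """Words are nodes; each head word plus its direct dependents
    forms one hyperedge (word composition)."""
    doc = nlp(question)
    edges = []
    for tok in doc:
        children = list(tok.children)
        if children:                          # leaves form no composition
            edges.append({tok.i} | {c.i for c in children})
    H = np.zeros((len(doc), len(edges)))      # |V| x |E| incidence matrix
    for e, nodes in enumerate(edges):
        for v in nodes:
            H[v, e] = 1.0
    return H

def hypergraph_conv(X: np.ndarray, H: np.ndarray, Theta: np.ndarray) -> np.ndarray:
    """One HGNN-style layer: X' = ReLU(Dv^-1/2 H De^-1 H^T Dv^-1/2 X Theta),
    with uniform hyperedge weights."""
    Dv = np.diag(1.0 / np.sqrt(H.sum(axis=1) + 1e-8))   # node degrees
    De = np.diag(1.0 / (H.sum(axis=0) + 1e-8))          # hyperedge degrees
    A = Dv @ H @ De @ H.T @ Dv                          # normalized propagation matrix
    return np.maximum(A @ X @ Theta, 0.0)

H = build_incidence_matrix("What is the man holding in his left hand?")
X = np.random.randn(H.shape[0], 300)          # e.g. pretrained word embeddings
Theta = np.random.randn(300, 128)
node_repr = hypergraph_conv(X, H, Theta)
# Composition (hyperedge) representations: average the features of member nodes.
comp_repr = (H.T @ node_repr) / (H.sum(axis=0, keepdims=True).T + 1e-8)
```

The readout at the end (averaging member-node features per hyperedge) is one simple way to obtain the initial word-composition representations mentioned in the abstract; the paper may use a different pooling scheme.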
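For the cross-modal alignment step, the sketch below illustrates an entropic-regularized optimal transport (Sinkhorn) alignment between word-composition features and video-region features. The cosine cost, uniform marginals, and regularization strength are illustrative assumptions; the paper's exact OT formulation and how the transported information is injected back into the representations may differ.

```python
# Sketch only: cost function, marginals, and eps are assumptions.
import numpy as np

def sinkhorn(cost: np.ndarray, eps: float = 0.1, n_iter: int = 50) -> np.ndarray:
    """Entropic-regularized OT with uniform marginals; returns the transport
    plan aligning text compositions (rows) to video regions (columns)."""
    m, n = cost.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)   # uniform marginals
    K = np.exp(-cost / eps)                           # Gibbs kernel
    u = np.ones(m)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)                # transport plan

def cosine_cost(text_feat: np.ndarray, video_feat: np.ndarray) -> np.ndarray:
    t = text_feat / np.linalg.norm(text_feat, axis=1, keepdims=True)
    v = video_feat / np.linalg.norm(video_feat, axis=1, keepdims=True)
    return 1.0 - t @ v.T                              # cosine distance matrix

text = np.random.randn(6, 128)    # 6 word-composition representations
video = np.random.randn(20, 128)  # 20 region features projected to the same dim
plan = sinkhorn(cosine_cost(text, video))
# Barycentric projection: gather cross-modal (region) information per composition,
# which can then be fused into the initial composition representations.
aligned_text = (plan / plan.sum(axis=1, keepdims=True)) @ video
```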