Learning a common subspace is a prevalent approach in cross-modal retrieval to address the problem that data from different modalities have inconsistent distributions and representations and therefore cannot be compared directly. Previous cross-modal retrieval methods project cross-modal data into a common space by learning the correlations between the modalities to bridge the modality gap. However, the rich semantic information in video and the heterogeneous nature of audio-visual data lead to a more severe heterogeneity gap. As a result, previous methods that rely on a single clue may lose key semantic content of the video when eliminating the modality gap, while category semantics may undermine the properties of the original features. In this work, we aim to learn effective audio-visual representations to support audio-visual cross-modal retrieval (AVCMR). We propose a novel model that maps the audio and visual modalities into two distinct shared latent subspaces: an explicit and an implicit shared space. The explicit shared space is used to optimize pairwise correlations, where the representations learned across modalities capture the commonalities of audio-visual pairs and reduce the modality gap. The implicit shared space preserves the distinctive features of each modality by maintaining the discrimination of audio/video patterns from different semantic categories. Finally, the fusion of the features learned in the two latent subspaces is used for similarity computation in the AVCMR task. Comprehensive experimental results on two audio-visual datasets demonstrate that our model, which learns from two different latent subspaces for audio-visual cross-modal learning, is effective and significantly outperforms state-of-the-art cross-modal models that learn features from a single subspace.
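The two-subspace retrieval pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: linear projections stand in for the learned encoders, concatenation stands in for the fusion step, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 128-d input features, two 32-d latent subspaces.
d_in, d_sub = 128, 32

# One projection per modality per subspace (placeholders for learned encoders).
W_audio_exp = rng.standard_normal((d_in, d_sub))  # audio -> explicit space
W_audio_imp = rng.standard_normal((d_in, d_sub))  # audio -> implicit space
W_video_exp = rng.standard_normal((d_in, d_sub))  # video -> explicit space
W_video_imp = rng.standard_normal((d_in, d_sub))  # video -> implicit space

audio = rng.standard_normal(d_in)  # placeholder audio feature vector
video = rng.standard_normal(d_in)  # placeholder video feature vector

# Fuse the explicit (pairwise-correlation) and implicit (category-discriminative)
# subspace representations by concatenation.
a = np.concatenate([audio @ W_audio_exp, audio @ W_audio_imp])
v = np.concatenate([video @ W_video_exp, video @ W_video_imp])

# Retrieval score on the fused representations: cosine similarity.
similarity = float(a @ v / (np.linalg.norm(a) * np.linalg.norm(v)))
```

In a trained model, the explicit-space projections would be optimized with a pairwise correlation objective and the implicit-space projections with a category-discrimination objective before fusion; here the weights are random purely to show the data flow.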