Representation learning for sketch-based image retrieval has mostly been tackled by learning embeddings that discard modality-specific information. As instances from different modalities can often provide complementary information describing the underlying concept, we propose a cross-attention framework for Vision Transformers (XModalViT) that fuses modality-specific information instead of discarding it. Our framework first maps paired datapoints from the individual photo and sketch modalities to fused representations that unify information from both modalities. We then decouple the input space of the aforementioned modality fusion network into independent encoders of the individual modalities via contrastive and relational cross-modal knowledge distillation. Such encoders can then be applied to downstream tasks like cross-modal retrieval. We demonstrate the expressive capacity of the learned representations through a wide range of experiments, achieving state-of-the-art results on three fine-grained sketch-based image retrieval benchmarks: Shoe-V2, Chair-V2, and Sketchy. Our implementation is available at https://github.com/abhrac/xmodal-vit.
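To make the two-stage idea concrete, the following is a minimal sketch of (1) cross-attention fusion of photo and sketch token sequences and (2) contrastive distillation of the fused "teacher" representation into a unimodal "student" encoder. The layer choices, token shapes, and the `CrossAttentionFusion` / `contrastive_distillation_loss` names are assumptions for illustration, not the authors' exact architecture; see the linked repository for the actual implementation.

```python
# Hedged sketch of cross-modal fusion + contrastive distillation (assumed design,
# not the official XModalViT code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionFusion(nn.Module):
    """Fuses photo tokens (queries) with sketch tokens (keys/values)."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, photo_tokens: torch.Tensor, sketch_tokens: torch.Tensor) -> torch.Tensor:
        # Photo tokens attend to sketch tokens; the first (CLS-like) token of the
        # result serves as the fused cross-modal representation.
        fused, _ = self.attn(query=photo_tokens, key=sketch_tokens, value=sketch_tokens)
        fused = self.norm(fused + photo_tokens)
        return fused[:, 0]


def contrastive_distillation_loss(student: torch.Tensor,
                                  teacher: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss pulling each unimodal (student) embedding toward its
    paired fused (teacher) embedding and away from other pairs in the batch."""
    student = F.normalize(student, dim=-1)
    teacher = F.normalize(teacher, dim=-1)
    logits = student @ teacher.t() / temperature
    targets = torch.arange(student.size(0), device=student.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, N, D = 4, 197, 768  # batch, tokens per ViT (196 patches + CLS), embed dim
    photo_tokens = torch.randn(B, N, D)   # stand-in for a photo ViT's output
    sketch_tokens = torch.randn(B, N, D)  # stand-in for a sketch ViT's output

    fusion = CrossAttentionFusion(dim=D)
    teacher_emb = fusion(photo_tokens, sketch_tokens)  # fused representation
    student_emb = photo_tokens[:, 0]                   # unimodal CLS embedding
    loss = contrastive_distillation_loss(student_emb, teacher_emb.detach())
    print(loss.item())
```

In this sketch the teacher embedding is detached so gradients only update the unimodal student, mirroring a standard knowledge-distillation setup; the relational distillation term mentioned in the abstract would additionally match pairwise similarity structure between student and teacher batches.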