The performance of a zero-shot sketch-based image retrieval (ZS-SBIR) task is shaped primarily by two challenges: the substantial domain gap between image and sketch features must be bridged, and the semantic side information must be chosen carefully. Existing literature shows that the choice of semantic side information greatly affects ZS-SBIR performance. To this end, we propose GTZSR, a novel graph-transformer-based framework for ZS-SBIR that preserves the topology of the classes in the semantic space and propagates the context graph of the classes into the embedding features of the visual space. To bridge the domain gap between the visual features, we propose minimizing the Wasserstein distance between image and sketch features in a learned domain-shared space. We further propose a novel compatibility loss that aligns the two visual domains by relating the domain gap of each class to the domain gaps of all other classes in the training set. Experimental results on the extended Sketchy, TU-Berlin, and QuickDraw datasets show sharp improvements over existing state-of-the-art methods in both ZS-SBIR and generalized ZS-SBIR.
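To make the domain-alignment idea concrete, the sketch below illustrates one common way to estimate a Wasserstein distance between two batches of visual features: the sliced-Wasserstein approximation, which averages 1-D Wasserstein distances over random projections. This is an illustrative stand-in, not the paper's exact formulation; the function name, projection count, and the use of NumPy are assumptions for the example.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Approximate the Wasserstein distance between two equal-sized
    feature batches x, y of shape (n, d) (e.g. image vs. sketch
    embeddings in a shared space) via random 1-D projections.
    Illustrative only -- not the paper's exact loss."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random unit directions to project the features onto.
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # In 1-D, the Wasserstein-1 distance is the mean absolute
    # difference between the sorted projected samples.
    px = np.sort(x @ theta.T, axis=0)  # (n, n_proj)
    py = np.sort(y @ theta.T, axis=0)
    return float(np.mean(np.abs(px - py)))
```

In a training loop, such a term would be added to the retrieval objective so that gradients pull the image and sketch feature distributions toward each other in the shared space; the distance is zero when the two projected distributions coincide and grows as they drift apart.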