A fundamental challenge faced by existing Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) models is data scarcity: model performance is largely bottlenecked by the lack of sketch-photo pairs. Whilst the number of photos can be easily scaled, each corresponding sketch still needs to be individually produced. In this paper, we aim to mitigate this upper bound on sketch data, and study whether unlabelled photos alone (of which there are many) can be cultivated for performance gains. In particular, we introduce a novel semi-supervised framework for cross-modal retrieval that can additionally leverage large-scale unlabelled photos to account for data scarcity. At the centre of our semi-supervision design is a sequential photo-to-sketch generation model that aims to generate paired sketches for unlabelled photos. Importantly, we further introduce a discriminator-guided mechanism to guard against unfaithful generation, together with a distillation-loss-based regularizer to provide tolerance against noisy training samples. Last but not least, we treat generation and retrieval as two conjugate problems, and devise a joint learning procedure that allows each module to benefit from the other. Extensive experiments show that our semi-supervised model yields a significant performance boost over state-of-the-art supervised alternatives, as well as over existing methods that can exploit unlabelled photos for FG-SBIR.
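To make the noise-tolerance idea concrete, the sketch below shows one plausible form of the instance-weighted retrieval objective: a standard triplet loss over sketch-photo embeddings, where each generated (pseudo-paired) sample is down-weighted by a per-sample confidence, such as one derived from a distillation teacher's agreement. This is a minimal NumPy illustration under assumed conventions (anchor = sketch embedding, positive/negative = photo embeddings, Euclidean distances); the function names and the exact weighting scheme are illustrative, not the paper's actual implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Per-sample triplet loss on L2 distances.

    anchor: (B, D) sketch embeddings; positive/negative: (B, D) photo
    embeddings. Returns a (B,) array of hinge losses.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin)

def weighted_retrieval_loss(sketch, photo_pos, photo_neg, confidence):
    """Retrieval loss with per-sample confidence weighting.

    `confidence` (B,) would be near 1 for trusted labelled pairs and
    smaller for noisy generated sketches, so unfaithful pseudo-pairs
    contribute less gradient to the retrieval model.
    """
    per_sample = triplet_loss(sketch, photo_pos, photo_neg)
    return float(np.mean(confidence * per_sample))
```

A confidence of zero removes a pseudo-pair from the objective entirely, which recovers the fully supervised setting on the labelled subset.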