This paper proposes a novel diffusion-based model, CompoDiff, for solving Composed Image Retrieval (CIR) with latent diffusion, and presents a newly created dataset of 18 million triplets of reference images, conditions, and corresponding target images for training the model. CompoDiff not only achieves a new zero-shot state-of-the-art on CIR benchmarks such as FashionIQ, but also enables more versatile CIR by accepting various conditions, such as negative text and image mask conditions, which are unavailable in existing CIR methods. In addition, the CompoDiff features lie in the intact CLIP embedding space, so they can be directly used by all existing models that exploit the CLIP space. The code, the dataset used for training, and the pre-trained weights are available at https://github.com/navervision/CompoDiff