Product retrieval is of great importance in the e-commerce domain. This paper introduces our 1st-place solution in the eBay eProduct Visual Search Challenge (FGVC9), which features an ensemble of about 20 vision models and vision-language models. While model ensembling is common, we show that combining vision models and vision-language models brings particular benefits from their complementarity and is a key factor in our superiority. Specifically, for the vision models, we use a two-stage training pipeline that first learns from the coarse labels provided in the training set and then conducts fine-grained self-supervised training, yielding a coarse-to-fine metric learning scheme. For the vision-language models, we use the textual descriptions of the training images as supervision signals for fine-tuning the image encoder (feature extractor). With these designs, our solution achieves 0.7623 MAR@10, ranking first among all competitors. The code is available at: \href{https://github.com/WangWenhao0716/V2L}{V$^2$L}.
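The model-ensemble retrieval described above can be sketched minimally as follows. This is an illustrative assumption, not the paper's exact recipe: it assumes each model produces a feature matrix per image set, ensembles by concatenating L2-normalized per-model features, and retrieves by cosine similarity (the function names `ensemble_embed` and `retrieve_top_k` are hypothetical).

```python
import numpy as np

def ensemble_embed(per_model_feats):
    """Concatenate L2-normalized features from several models.

    per_model_feats: list of (num_images, dim_i) arrays, one per model.
    Returns a (num_images, sum(dim_i)) array; every row has the same
    norm (sqrt of the number of models), so dot products preserve the
    averaged-cosine ranking across models.
    """
    normed = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in per_model_feats]
    return np.concatenate(normed, axis=1)

def retrieve_top_k(query_feats, index_feats, k=10):
    """Return the indices of the k most similar index images per query."""
    sims = query_feats @ index_feats.T  # cosine-proportional similarity
    return np.argsort(-sims, axis=1)[:, :k]
```

A metric such as MAR@10 would then be computed over these top-10 lists against ground-truth matches.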