Understanding vision and language representations of product content is vital for search and recommendation applications in e-commerce. Inspired by recent advances in representation learning and motivated by the role such representations play as a backbone for online shopping platforms, we propose a contrastive learning framework that aligns language and visual models using unlabeled raw product text and images. We present the techniques we used to train large-scale representation learning models and share solutions that address domain-specific challenges. We evaluate our pre-trained models as backbones for diverse downstream tasks, including category classification, attribute extraction, product matching, product clustering, and adult product recognition. Experimental results show that the proposed method outperforms the baselines on each downstream task, in both single-modality and multi-modality settings.
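To make the alignment objective concrete, the sketch below shows one common way such a contrastive framework can be instantiated: a CLIP-style symmetric InfoNCE loss over paired product image and text embeddings. This is an illustrative assumption, not the paper's actual implementation; the function name, temperature value, and embedding shapes are hypothetical.

```python
# Illustrative sketch of a CLIP-style symmetric contrastive (InfoNCE) objective
# that aligns image and text embeddings of the same product. All names and
# hyperparameters are hypothetical, not the authors' implementation.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) embeddings of paired product images and titles."""
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix; entry (i, j) compares image i with text j.
    logits = image_emb @ text_emb.t() / temperature

    # Matching image/text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy over the image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```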