Product matching is a fundamental step for the global understanding of consumer behavior in e-commerce. In practice, product matching refers to the task of deciding whether two product offers from different data sources (e.g. retailers) represent the same product. Standard pipelines include a preliminary stage called blocking, in which, for a given product offer, a set of potential matching candidates is retrieved based on similar characteristics (e.g. same brand, category, flavor, etc.). Among these similar product candidates, those that are not a match can be considered hard negatives. We present Block-SCL, a strategy that uses the blocking output to make the most of Supervised Contrastive Learning (SCL). Concretely, Block-SCL builds enriched batches using the hard-negative samples obtained in the blocking stage. These batches provide a strong training signal, leading the model to learn more meaningful sentence embeddings for product matching. Experimental results on several public datasets demonstrate that Block-SCL achieves state-of-the-art results despite using only short product titles as input, no data augmentation, and a lighter transformer backbone than competing methods.
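The core idea can be illustrated with a minimal sketch of the supervised contrastive loss computed over one batch. This is not the paper's implementation: the function name, the NumPy formulation, and the toy embeddings are assumptions for illustration. Offers of the same product share a label (positives), while blocking candidates that are not a match enter the batch with different labels, so they act as hard negatives in every anchor's denominator.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive (SupCon-style) loss over one batch.

    embeddings: (N, d) sentence embeddings (normalized internally).
    labels: (N,) product ids; offers of the same product share a label.
    Hard negatives retrieved by blocking share the batch but not the
    label, so they appear in each anchor's denominator below.
    """
    # L2-normalize so the dot product is a cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without a positive contribute nothing
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # Average -log p(positive | anchor) over the anchor's positives.
        total += -np.mean([sim[i, p] - log_denom for p in positives])
        count += 1
    return total / max(count, 1)

# Toy batch: two offers of product 0, plus two non-matching candidates.
batch = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.0, -1.0]])
labels = np.array([0, 0, 1, 2])
loss = supervised_contrastive_loss(batch, labels)
```

Because the denominator sums over all other samples in the batch, filling it with hard negatives from blocking (rather than random offers) makes the contrast harder and the gradient more informative, which is the training signal the abstract refers to.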