Product matching is the task of identifying identical products across different data sources. It typically relies on the available product features which, apart from being multimodal, i.e., composed of various data types, may be non-homogeneous and incomplete. This paper shows that pre-trained, multilingual Transformer models, after fine-tuning, are suitable for solving the product matching problem using textual features in both English and Polish. We evaluated the multilingual mBERT and XLM-RoBERTa models on English data from the Web Data Commons training dataset and gold standard for large-scale product matching. The results show that these models perform comparably to the latest solutions evaluated on this set, and in some cases even outperform them. Additionally, we prepared a new dataset -- ProductMatch.pl -- that is entirely in Polish and built, for research purposes, from offers in selected categories collected from several online stores. It is the first open dataset for product matching tasks in Polish, enabling comparison of the effectiveness of pre-trained models. We also report the baseline results obtained by the fine-tuned mBERT and XLM-RoBERTa models on this Polish dataset.
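As a minimal sketch of the approach summarized above, the snippet below frames product matching as sequence-pair classification with a pre-trained multilingual Transformer via the Hugging Face transformers library; the model choice, label convention, and example offers are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: product matching as sequence-pair classification.
# Model names, labels, and example offers are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-base"  # or "bert-base-multilingual-cased" for mBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # assumed labels: 0 = non-match, 1 = match
)

# Two offers from different sources; textual features of each offer are
# concatenated into one string (hypothetical English and Polish examples).
offer_a = "Samsung Galaxy S10 128GB Prism Black smartphone"
offer_b = "Smartfon Samsung Galaxy S10 128 GB czarny"

# Encode the pair as a single input with a separator token; the classifier
# head predicts whether both offers describe the same product.
inputs = tokenizer(offer_a, offer_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(match) = {match_prob:.3f}")  # meaningful only after fine-tuning
```

In practice the classification head is fine-tuned on labeled offer pairs (e.g., from the Web Data Commons training set or ProductMatch.pl) before the predicted probabilities become useful.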