The pretrain-finetune paradigm has recently become prevalent in many NLP tasks, such as question answering, text classification, and sequence labeling. As the state-of-the-art model, BERT pre-trained on a general corpus (e.g., Wikipedia) has been widely used in these tasks. However, such BERT-style models still show limitations in certain scenarios, particularly two: a corpus whose text differs substantially from the general corpus (e.g., Wikipedia), and a task that must learn the spatial distribution of embeddings for a specific purpose (e.g., approximate nearest neighbor search). In this paper, to tackle these dilemmas, which we also encounter in an industrial e-commerce search system, we propose novel customized pre-training tasks for two critical modules: user intent detection and semantic embedding retrieval. The customized pre-trained models, after task-specific fine-tuning, are less than 10% of BERT-base's size to enable cost-efficient CPU serving, yet significantly outperform their counterparts on both offline evaluation metrics and online benefits. We have open-sourced our datasets for reproducibility and future work.