We present a system that allows users to train their own state-of-the-art paraphrastic sentence representations in a variety of languages. We also release trained models for English, Arabic, German, French, Spanish, Russian, Turkish, and Chinese. We train these models on large amounts of data, achieving performance that significantly improves upon that reported in the original papers proposing the methods, across a suite of monolingual semantic similarity, cross-lingual semantic similarity, and bitext mining tasks. Moreover, the resulting models surpass all prior work on unsupervised semantic textual similarity, significantly outperforming even BERT-based models like Sentence-BERT (Reimers and Gurevych, 2019). Additionally, our models are orders of magnitude faster than prior work and can be used on CPU with little loss in inference speed (and can even be faster than GPU inference when more CPU cores are used), making them an attractive choice for users without access to GPUs or for use on embedded devices. Finally, we significantly extend the functionality of the codebases for training paraphrastic sentence models, easing their use both for inference and for training models in any language for which parallel data is available. We also include code to automatically download and preprocess training data.
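As a point of reference for how such paraphrastic sentence representations are typically used, the sketch below averages learned (sub)word embeddings into a sentence vector and scores a sentence pair with cosine similarity. This is an illustrative assumption-laden example, not the released code's API: the vocabulary, the random embedding table, and the `embed`/`cosine` helpers are stand-ins for trained components.

```python
# Minimal sketch (hypothetical names, not the released API): sentence
# embeddings formed by averaging (sub)word embeddings, compared with
# cosine similarity. In practice the embedding table is learned on
# large paraphrase/parallel corpora rather than randomly initialized.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "a": 3, "feline": 4, "rested": 5}
emb = rng.normal(size=(len(vocab), 300)).astype(np.float32)  # stand-in table

def embed(sentence: str) -> np.ndarray:
    """Average the embeddings of known tokens to get a sentence vector."""
    ids = [vocab[t] for t in sentence.lower().split() if t in vocab]
    return emb[ids].mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two sentence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

score = cosine(embed("the cat sat"), embed("a feline rested"))
print(f"similarity: {score:.3f}")
```

Because inference reduces to embedding lookups, an average, and a dot product, this style of model runs efficiently on CPU, which is consistent with the speed claims above.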