This paper presents the first Swedish evaluation benchmark for textual semantic similarity. The benchmark is compiled by simply running the English STS-B dataset through the Google machine translation API. We discuss potential problems with using such a simple approach to compile a Swedish evaluation benchmark, including translation errors, vocabulary variation, and productive compounding. Despite some obvious problems with the resulting dataset, we use the benchmark to compare the majority of the currently existing Swedish text representations, demonstrating that native models outperform multilingual ones, and that simple bag-of-words representations perform remarkably well.
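
The compilation procedure can be illustrated with a minimal sketch: translate the two sentence columns of the English STS-B files into Swedish while keeping the gold similarity scores untouched. The file name, the column layout, and the use of the google-cloud-translate v2 client are assumptions made for illustration, not the paper's exact pipeline.

```python
from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

def to_swedish(text: str) -> str:
    """Machine-translate one English sentence into Swedish."""
    return client.translate(text, source_language="en",
                            target_language="sv")["translatedText"]

with open("sts-train.csv", encoding="utf-8") as src, \
     open("sts-train.sv.csv", "w", encoding="utf-8") as dst:
    for line in src:
        cols = line.rstrip("\n").split("\t")
        # Assumed STS-B layout: gold score in column 4, the English
        # sentence pair in columns 5 and 6; only the sentences change.
        cols[5] = to_swedish(cols[5])
        cols[6] = to_swedish(cols[6])
        dst.write("\t".join(cols) + "\n")
```

The resulting Swedish sentence pairs inherit the English gold scores directly, which is exactly the source of the translation-error and vocabulary-variation issues discussed above.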