The degree of semantic relatedness of two units of language has long been considered fundamental to understanding meaning. Automatically determining relatedness also has many applications, such as question answering and summarization. However, prior NLP work has largely focused on semantic similarity, a subset of relatedness, because of a lack of relatedness datasets. In this paper, we introduce a dataset for Semantic Textual Relatedness, STR-2022, comprising 5,500 English sentence pairs manually annotated using a comparative annotation framework, resulting in fine-grained scores. We show that human intuition regarding the relatedness of sentence pairs is highly reliable, with a repeat-annotation correlation of 0.84. We use the dataset to explore questions about what makes sentences semantically related. We also show the utility of STR-2022 for evaluating automatic methods of sentence representation and for various downstream NLP tasks. Our dataset, data statement, and annotation questionnaire can be found at: https://doi.org/10.5281/zenodo.7599667
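As a concrete illustration of the kind of evaluation the abstract describes, the sketch below scores a generic sentence-embedding model against human relatedness judgments using Spearman correlation. The file name, column names, and model choice are illustrative assumptions, not the paper's released format or specific setup.

```python
# Minimal sketch: evaluating a sentence-embedding model on STR-2022-style data
# via Spearman correlation between cosine similarity and human relatedness scores.
# Assumed (not from the paper): a CSV with columns "sentence1", "sentence2", "score".
import pandas as pd
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

pairs = pd.read_csv("str2022.csv")  # hypothetical filename; adjust to the released data
model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here

emb1 = model.encode(pairs["sentence1"].tolist(), convert_to_tensor=True)
emb2 = model.encode(pairs["sentence2"].tolist(), convert_to_tensor=True)

# Cosine similarity of each pair, compared against the human relatedness scores.
cosine = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()
rho, _ = spearmanr(cosine, pairs["score"])
print(f"Spearman correlation with human relatedness scores: {rho:.3f}")
```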