Modal verbs (e.g., "can", "should", or "must") occur highly frequently in scientific articles. Decoding their function is not straightforward: they are often used for hedging, but they may also denote abilities and restrictions. Understanding their meaning is important for various NLP tasks such as writing assistance or accurate information extraction from scientific text. To foster research on the usage of modals in this genre, we introduce the MIST (Modals In Scientific Text) dataset, which contains 3737 modal instances in five scientific domains annotated for their semantic, pragmatic, or rhetorical function. We systematically evaluate a set of competitive neural architectures on MIST. Transfer experiments reveal that leveraging non-scientific data is of limited benefit for modeling the distinctions in MIST. Our corpus analysis provides evidence that scientific communities differ in their usage of modal verbs, yet, classifiers trained on scientific data generalize to some extent to unseen scientific domains.