The field of natural language processing (NLP) has recently seen a major shift towards using pre-trained language models for solving almost any task. Despite showing great improvements on benchmark datasets for various tasks, these models often perform sub-optimally in non-standard domains such as the clinical domain, where a large gap between pre-training documents and target documents is observed. In this paper, we aim to close this gap with domain-specific pre-training of the language model, and we investigate its effect on a diverse set of downstream tasks and settings. We introduce the pre-trained CLIN-X (Clinical XLM-R) language models and show how CLIN-X outperforms other pre-trained transformer models by a large margin on ten clinical concept extraction tasks from two languages. In addition, we demonstrate how the transformer model can be further improved with our proposed task- and language-agnostic model architecture based on ensembles over random splits and cross-sentence context. Our studies in low-resource and transfer settings reveal stable model performance despite a lack of annotated data, with improvements of up to 47 F1 points when only 250 labeled sentences are available. Our results highlight the importance of specialized language models such as CLIN-X for concept extraction in non-standard domains, but also show that our task-agnostic model architecture is robust across the tested tasks and languages, so that domain- or task-specific adaptations are not required. The CLIN-X language models and the source code for fine-tuning and transferring the model are publicly available at https://github.com/boschresearch/clin_x/ and on the Hugging Face model hub.
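Since the abstract points to the Hugging Face model hub and names majority-style ensembling over random splits as a key ingredient, the following minimal sketch illustrates both ideas. It is not the authors' exact pipeline: the hub identifier, the label count, and the `majority_vote` helper are assumptions for illustration; consult the linked repository README for the released model names and the actual fine-tuning code.

```python
# Minimal sketch (assumptions, not the authors' exact pipeline):
# (1) load a CLIN-X checkpoint from the Hugging Face hub for token
#     classification, and (2) combine per-token predictions from several
#     models trained on different random train/dev splits via majority vote.
from collections import Counter
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed hub identifier; check the repository README for the exact name.
MODEL_ID = "llange/xlm-roberta-large-english-clinical"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# The classification head is freshly initialized and must be fine-tuned;
# num_labels=5 is a placeholder for the task's tag set size.
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID, num_labels=5)

def majority_vote(per_model_labels):
    """Combine label sequences from several ensemble members.

    per_model_labels: list of equal-length label sequences, one per model
    (one label per token). Returns the per-token majority label.
    """
    return [
        Counter(token_labels).most_common(1)[0][0]
        for token_labels in zip(*per_model_labels)
    ]

# Example: three members trained on different random splits disagree on
# the last token; the majority label wins.
print(majority_vote([
    ["B-Drug", "O", "O"],
    ["B-Drug", "O", "B-Dose"],
    ["B-Drug", "O", "B-Dose"],
]))  # -> ['B-Drug', 'O', 'B-Dose']
```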