In this paper, we explore possible improvements of transformer models in a low-resource setting. In particular, we present our approaches to the first two of the three subtasks of the MEDDOPROF competition, i.e., the extraction and classification of job expressions in Spanish clinical texts. Being neither language nor domain experts, we experiment with the multilingual XLM-R transformer model and treat these low-resource information extraction tasks as sequence-labeling problems. We explore domain- and language-adaptive pretraining, transfer learning, and strategic data splits to boost the transformer model. Our results show strong improvements from these methods of up to 5.3 F1 points compared to a fine-tuned XLM-R model. Our best models achieve 83.2 and 79.3 F1 on the first two tasks, respectively.