Biomedical named entity recognition (NER) is a key task in extracting information from biomedical literature and electronic health records. Both generic and biomedical BERT models are widely used for this task. The robustness of these models is vital for medical applications, such as automated medical decision making. In this paper, we investigate the vulnerability of BERT models to variation in input data for NER through adversarial attacks. Since adversarial attack methods for NER are scarce, we propose two black-box methods for NER based on existing methods for classification tasks. Experimental results show that both the original and the biomedical BERT models are highly vulnerable to entity replacement: they can be fooled into mislabeling previously correct entities in 89.2% to 99.4% of cases. BERT models are also vulnerable to variation in the entity context, with 20.2% to 45.0% of entities predicted completely wrong and a further 29.3% to 53.3% predicted partially wrong. Often a single change is sufficient to fool the model. BERT models appear most vulnerable to changes in the local context of entities. Among the biomedical BERT models, BioBERT is about as vulnerable as the original BERT model, whereas SciBERT is even more vulnerable. Our results chart the vulnerabilities of BERT models for biomedical NER and emphasize the importance of further research into uncovering and reducing these weaknesses.
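The abstract names two black-box attack methods without describing them; as a rough illustration of the first idea (entity replacement), the sketch below probes a token-classification pipeline by swapping same-type mentions into a fixed sentence and checking whether the expected label survives. This is not the paper's actual method: the model name, template sentence, candidate mentions, and label below are placeholder assumptions, and for the biomedical setting one would substitute a biomedical BERT NER model and biomedical entity types.

```python
# A minimal sketch of a black-box entity-replacement probe for a BERT NER model.
# Assumptions (not from the paper): the HuggingFace model name is a placeholder,
# and the template sentence, candidate mentions, and target label are invented
# purely for illustration.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",      # placeholder BERT-based NER model
    aggregation_strategy="simple",    # merge word pieces into entity spans
)

def entity_replacement_attack(template, candidates, target_label):
    """Swap candidate mentions into the template and report which ones the
    model fails to tag with the expected entity label (i.e. the attack wins)."""
    fooled = []
    start = template.index("{}")      # character offset of the entity slot
    for cand in candidates:
        sentence = template.format(cand)
        end = start + len(cand)
        preds = ner(sentence)
        # Success: no predicted span overlapping the slot carries target_label.
        covered = any(
            p["entity_group"] == target_label
            and p["start"] < end and p["end"] > start
            for p in preds
        )
        if not covered:
            fooled.append(cand)
    return fooled

# Hypothetical usage: probe how many same-type replacements flip the prediction.
template = "{} was admitted to University Hospital Leiden on Monday."
print(entity_replacement_attack(template, ["John Smith", "Maria Lopez"], "PER"))
```

The fraction of candidates returned by such a probe corresponds to the kind of attack success rate reported in the abstract for entity replacement.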