Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking, where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT, and PubMedBERT, our pretraining scheme proves to be both effective and robust.
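To make the self-alignment idea concrete, the sketch below illustrates one way such a metric learning objective over UMLS synonym pairs could look. This is an assumption for illustration only: the abstract does not specify the loss, so a simple contrastive (InfoNCE-style) objective with in-batch negatives is used here, and the random tensors stand in for encoder outputs (e.g., from a PubMedBERT-style encoder).

```python
# Illustrative sketch, not the paper's exact objective: we assume a
# contrastive loss that pulls embeddings of synonymous entity names
# (same UMLS concept) together and pushes non-synonyms apart.
import torch
import torch.nn.functional as F


def self_alignment_loss(anchor_emb, positive_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of synonym pairs.

    anchor_emb, positive_emb: (B, d) embeddings of two surface forms
    of the same concept; other rows in the batch act as negatives.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    logits = anchor @ positive.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: random vectors in place of encoder outputs, e.g.
    # encoder("myocardial infarction") and encoder("heart attack").
    B, d = 8, 768
    anchor_emb = torch.randn(B, d)
    positive_emb = torch.randn(B, d)
    print(self_alignment_loss(anchor_emb, positive_emb).item())
```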