Biomedical research is growing at such an exponential pace that scientists, researchers, and practitioners are no longer able to cope with the volume of published literature in the domain. The knowledge presented in the literature needs to be systematized so that claims and hypotheses can be easily found, accessed, and validated. Knowledge graphs can provide such a framework for semantic knowledge representation of the literature. However, in order to build a knowledge graph, it is necessary to extract knowledge as relationships between biomedical entities and to normalize both the entities and the relationship types. In this paper, we present and compare several rule-based and machine learning-based methods (Naive Bayes and Random Forests as examples of traditional machine learning methods, and DistilBERT-, PubMedBERT-, T5-, and SciFive-based models as examples of modern deep learning transformers) for scalable relationship extraction from biomedical literature and for integration into knowledge graphs. We examine how resilient these methods are to unbalanced and fairly small datasets. Our experiments show that transformer-based models handle both small datasets (thanks to pre-training on large corpora) and unbalanced datasets well. The best-performing model was the PubMedBERT-based model fine-tuned on balanced data, with a reported F1-score of 0.92. The DistilBERT-based model followed with an F1-score of 0.89, while running faster and with lower resource requirements. BERT-based models performed better than T5-based generative models.
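To make the rule-based baseline concrete, the following is a minimal sketch of pattern-driven relationship extraction, assuming a small set of hypothetical lexical cue patterns (the actual rule set used in the paper is not shown here); each pattern maps a trigger phrase between two entity mentions to a normalized relation type suitable for loading into a knowledge graph as a (head, relation, tail) triple.

```python
import re

# Hypothetical cue patterns mapping a lexical trigger to a normalized
# relation type; these are illustrative, not the paper's actual rules.
RELATION_PATTERNS = [
    (re.compile(r"(\w[\w-]*)\s+inhibits\s+(\w[\w-]*)", re.I), "INHIBITS"),
    (re.compile(r"(\w[\w-]*)\s+activates\s+(\w[\w-]*)", re.I), "ACTIVATES"),
    (re.compile(r"(\w[\w-]*)\s+is associated with\s+(\w[\w-]*)", re.I),
     "ASSOCIATED_WITH"),
]

def extract_relations(sentence):
    """Return (head, relation_type, tail) triples matched by the rules."""
    triples = []
    for pattern, rel_type in RELATION_PATTERNS:
        for match in pattern.finditer(sentence):
            # group(1) is the head entity mention, group(2) the tail.
            triples.append((match.group(1), rel_type, match.group(2)))
    return triples
```

For example, `extract_relations("Aspirin inhibits COX-1")` yields the triple `("Aspirin", "INHIBITS", "COX-1")`. In practice such surface patterns are brittle, which is one motivation for the machine learning and transformer-based classifiers compared in the paper.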