Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main families of pre-trained language models in the general language domain, i.e., BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, e.g., BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical NLP tasks and demonstrate that our model outperforms previous models on most tasks. In particular, we achieve F1 scores of 44.98%, 38.42%, and 40.76% on the BC5CDR, KD-DTI, and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our larger model, BioGPT-Large, achieves 81.0% accuracy on PubMedQA. Our case study on text generation further demonstrates the advantage of BioGPT for biomedical text: it generates fluent descriptions of biomedical terms. Code is available at https://github.com/microsoft/BioGPT.
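To make the text-generation use case concrete, the following is a minimal sketch of prompting BioGPT for a description of a biomedical term. It assumes the Hugging Face transformers port of the model (model id "microsoft/biogpt", classes BioGptTokenizer and BioGptForCausalLM), which is separate from the official implementation in the repository linked above; the prompt and decoding settings here are illustrative only.

import torch
from transformers import BioGptForCausalLM, BioGptTokenizer

# Load the pre-trained BioGPT checkpoint and its tokenizer.
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
model.eval()

# Prompt with a biomedical term and generate a continuation.
prompt = "COVID-19 is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=100,      # cap on total length (prompt + continuation)
        num_beams=5,         # beam search tends to yield more fluent descriptions
        early_stopping=True,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))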