There has been an influx of biomedical domain-specific language models, showing that language models pre-trained on biomedical text perform better on biomedical domain benchmarks than those trained on general-domain text corpora such as Wikipedia and Books. Yet, most works do not study in depth the factors affecting each domain language application. Additionally, the effect of model size on domain-specific models has been largely unexplored. We empirically study and evaluate several factors that can affect performance on domain language applications, such as the sub-word vocabulary set, model size, pre-training corpus, and domain transfer. We show consistent improvements on benchmarks with our larger BioMegatron model trained on a larger domain corpus, contributing to our understanding of domain language model applications. We demonstrate noticeable improvements over the previous state-of-the-art (SOTA) on standard biomedical NLP benchmarks of named entity recognition, relation extraction, and question answering. Model checkpoints and code are available at [ngc.nvidia.com] and [github.com/NVIDIA/NeMo].