In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to handle specific domains more effectively. In this paper, we present an original study of PLMs for the medical domain in French. We compare, for the first time, the performance of PLMs trained on public data from the web and on private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of already existing biomedical PLMs in a foreign language by further pre-training them on our target data. Finally, we release DrBERT, the first specialized PLMs for the French biomedical domain, as well as the largest freely licensed corpus of medical data on which these models were trained.
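To make the continued pre-training strategy mentioned above concrete, the sketch below shows how a foreign-language biomedical PLM could be further pre-trained on a target-domain corpus with the HuggingFace `transformers` library. This is a minimal illustration, not the exact DrBERT pipeline: the starting checkpoint, the corpus file name `nachos_corpus.txt`, and all hyperparameters are assumptions chosen for demonstration.

```python
# Minimal sketch: continue masked-language-model (MLM) pre-training of an
# existing English biomedical PLM on a French medical corpus.
# Checkpoint, file path, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Example starting point: an English biomedical PLM from the HuggingFace Hub.
checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Hypothetical target corpus: one French medical document per line.
dataset = load_dataset("text", data_files={"train": "nachos_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of the input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="drbert-cp",          # illustrative output directory
    per_device_train_batch_size=16,  # illustrative hyperparameters
    num_train_epochs=1,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The design choice here mirrors the paper's finding: rather than pre-training from scratch, the model keeps the biomedical knowledge already encoded in the foreign-language checkpoint and adapts its representations to the target language and data through the same MLM objective.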