Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models, trained primarily on general web text, are the right tool in highly specialized, safety-critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC-III and MIMIC-IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when fine-tuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.