Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across the breadth of tasks they might be applied to. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes, including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, and the MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Examination-style questions), surpassing the prior state of the art by more than 17 percentage points. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
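The abstract names instruction prompt tuning without detailing its mechanics. As a rough illustration only, the sketch below shows the general soft-prompt-tuning pattern such an approach builds on: a small set of learnable prompt vectors is prepended to the frozen model's input embeddings and trained on a handful of exemplars, leaving all base-model weights untouched. In the paper's setting the text prompt additionally carries clinician-curated instructions and exemplars; the sketch shows only the parameter-efficient mechanism. All identifiers here (`SoftPromptWrapper`, `base_model`, `embed_tokens`, `PROMPT_LEN`) are hypothetical and not taken from the paper.

```python
# Minimal sketch of soft prompt tuning, assuming a decoder-only transformer
# exposed as a PyTorch module that accepts input embeddings directly.
# Only the soft-prompt parameters are trained; the base model is frozen.
import torch
import torch.nn as nn

PROMPT_LEN = 20   # number of learnable soft-prompt vectors (illustrative)


class SoftPromptWrapper(nn.Module):
    """Prepends trainable prompt embeddings to the token embeddings."""

    def __init__(self, base_model: nn.Module, embed_tokens: nn.Embedding):
        super().__init__()
        self.base_model = base_model
        self.embed_tokens = embed_tokens
        # Initialize soft prompts from random token embeddings, a common heuristic.
        init_ids = torch.randint(0, embed_tokens.num_embeddings, (PROMPT_LEN,))
        self.soft_prompt = nn.Parameter(embed_tokens(init_ids).detach().clone())
        # Freeze every base-model weight; gradients flow only to soft_prompt.
        for p in self.base_model.parameters():
            p.requires_grad_(False)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed_tokens(input_ids)                        # (B, T, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok], dim=1)           # (B, P+T, D)
        return self.base_model(inputs_embeds)
```

In use, the optimizer would be constructed over `[wrapper.soft_prompt]` alone, which is what makes the method parameter-efficient: the handful of prompt vectors is orders of magnitude smaller than the 540-billion-parameter model they steer.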