Background: As large language models (LLMs) become increasingly integrated into digital health education and assessment workflows, their capabilities in supporting high-stakes, domain-specific certification tasks remain underexplored. In China, the national pharmacist licensure exam serves as a standardized benchmark for evaluating pharmacists' clinical and theoretical competencies.

Objective: This study aimed to compare the performance of two LLMs, ChatGPT-4o and DeepSeek-R1, on authentic questions from the Chinese Pharmacist Licensing Examination (2017-2021), and to discuss the implications of the observed performance differences for AI-enabled formative evaluation.

Methods: A total of 2,306 text-only multiple-choice questions were compiled from official exams, training materials, and public databases; questions containing tables or images were excluded. Each item was presented in its original Chinese wording, and model responses were scored for exact-match accuracy. Pearson's chi-squared test was used to compare overall performance, and Fisher's exact test was applied to year-wise multiple-choice accuracy.

Results: DeepSeek-R1 outperformed ChatGPT-4o with a significantly higher overall accuracy (90.0% vs. 76.1%, p < 0.001). Unit-level analyses revealed consistent advantages for DeepSeek-R1, particularly in the foundational and clinical synthesis modules. Although year-by-year multiple-choice performance also favored DeepSeek-R1, the gap did not reach statistical significance in any individual unit-year (all p > 0.05).

Conclusion: DeepSeek-R1 demonstrated robust alignment with the structural and semantic demands of the pharmacist licensure exam. These findings suggest that domain-specific models warrant further investigation in this setting, while reinforcing the need for human oversight in legally and ethically sensitive applications.
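To make the reported comparison concrete, the minimal sketch below back-calculates approximate correct/incorrect counts from the accuracies stated in the abstract (90.0% and 76.1% of 2,306 items) and runs the two tests named in the Methods using scipy. The counts, and the assumption that both models attempted all 2,306 items, are illustrative reconstructions, not the study's raw data.

```python
# Hypothetical reconstruction of the headline comparison: a Pearson
# chi-squared test on a 2x2 contingency table of correct/incorrect
# counts per model. Counts are back-calculated from the reported
# accuracies and are approximate, not the study's actual data.
from scipy.stats import chi2_contingency, fisher_exact

N = 2306  # total text-only multiple-choice items (from the abstract)
deepseek_correct = round(0.900 * N)  # ~2075, assumed from 90.0% accuracy
chatgpt_correct = round(0.761 * N)   # ~1755, assumed from 76.1% accuracy

table = [
    [deepseek_correct, N - deepseek_correct],  # DeepSeek-R1: correct, incorrect
    [chatgpt_correct, N - chatgpt_correct],    # ChatGPT-4o: correct, incorrect
]

# Overall comparison, as in the Methods (Pearson's chi-squared test).
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")  # expect p << 0.001

# For the smaller year-level subsets the abstract uses Fisher's exact
# test; the same 2x2 table shape applies to each subset.
odds_ratio, p_exact = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher p = {p_exact:.3g}")
```

On the reconstructed counts, the chi-squared statistic is large and the p-value falls far below 0.001, consistent with the significance level the abstract reports for the overall comparison.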