Artificially intelligent (AI) co-scientists must be able to sift through research literature cost-efficiently while applying nuanced scientific reasoning. We evaluate Small Language Models (SLMs, ≤8B parameters) for classifying medical research papers. Using literature on the oncogenic potential of HMTV/MMTV-like viruses in breast cancer as a case study, we assess model performance under both zero-shot and in-context learning (ICL; few-shot prompting) strategies against frontier proprietary Large Language Models (LLMs). Llama 3 and Qwen2.5 outperform GPT-5 (API, low/high effort), Gemini 3 Pro Preview, and Meerkat in zero-shot settings, though they trail Gemini 2.5 Pro. ICL improves performance on a case-by-case basis, allowing Llama 3 and Qwen2.5 to match Gemini 2.5 Pro in binary classification. Systematic lexical-ablation experiments show that SLM decisions are often grounded in valid scientific cues but can be influenced by spurious textual artifacts, underscoring the need for interpretability in high-stakes pipelines. Our results reveal both the promise and the limitations of modern SLMs for scientific triage: pairing SLMs with simple but principled prompting strategies can approach the performance of the strongest LLMs for targeted literature filtering in co-scientist pipelines.