AI scientist systems, capable of autonomously executing the full research workflow from hypothesis generation and experimentation to paper writing, hold significant potential for accelerating scientific discovery. However, the internal workflows of these systems have not been closely examined. This lack of scrutiny risks introducing flaws that could undermine the integrity, reliability, and trustworthiness of their research outputs. In this paper, we identify four potential failure modes in contemporary AI scientist systems: inappropriate benchmark selection, data leakage, metric misuse, and post-hoc selection bias. To examine these risks, we design controlled experiments that isolate each failure mode while addressing challenges unique to evaluating AI scientist systems. Our assessment of two prominent open-source AI scientist systems reveals several failures, spanning a spectrum of severity, that can be easily overlooked in practice. Finally, we demonstrate that access to trace logs and code from the full automated workflow enables far more effective detection of such failures than examining the final paper alone. We thus recommend that journals and conferences evaluating AI-generated research mandate submission of these artifacts alongside the paper to ensure transparency, accountability, and reproducibility.