Survey papers are foundational to the scholarly progress of research communities, offering structured overviews that guide both novices and experts across disciplines. However, the recent surge of AI-generated surveys, enabled especially by large language models (LLMs), has transformed this traditionally labor-intensive genre into a low-effort, high-volume output. While such automation lowers entry barriers, it also introduces a critical threat to the research community: a phenomenon we term the "survey paper DDoS attack". This refers to the unchecked proliferation of superficially comprehensive but often redundant, low-quality, or even hallucinated survey manuscripts, which floods preprint platforms, overwhelms researchers, and erodes trust in the scientific record. In this position paper, we argue that the mass uploading of AI-generated survey papers (i.e., the survey paper DDoS attack) must stop, and that the community should institute strong norms for AI-assisted survey writing. We call for restoring expert oversight and transparency in AI usage and, moreover, for developing new infrastructures such as Dynamic Live Surveys: community-maintained, version-controlled repositories that blend automated updates with human curation. Through quantitative trend analysis, quality audits, and a discussion of cultural impact, we show that safeguarding the integrity of surveys is no longer optional but imperative for the research community.