LLM-based automatic survey systems are transforming how users acquire information from the web by integrating retrieval, organization, and content synthesis into end-to-end generation pipelines. While recent work focuses on developing new generation pipelines, evaluating such complex systems remains a significant challenge. To this end, we introduce SurveyEval, a comprehensive benchmark that evaluates automatically generated surveys across three dimensions: overall quality, outline coherence, and reference accuracy. We extend the evaluation across 7 subjects and augment the LLM-as-a-Judge framework with human references to strengthen the alignment between automated evaluation and human judgment. Evaluation results show that while general long-text or paper-writing systems tend to produce lower-quality surveys, specialized survey-generation systems deliver substantially higher-quality results. We envision SurveyEval as a scalable testbed for understanding and improving automatic survey systems across diverse subjects and evaluation criteria.
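As a rough illustration of the evaluation setup described above, the sketch below shows one way an LLM-as-a-Judge rubric could be anchored with a human-written reference survey. The three dimension names follow the abstract, but the scoring scale, prompt wording, and `call_llm` interface are illustrative assumptions, not SurveyEval's actual implementation.

```python
# Hypothetical sketch of reference-anchored LLM-as-a-Judge scoring.
# Dimension names follow the abstract; everything else (scale, prompt,
# judge interface) is an illustrative assumption, not SurveyEval's code.
from typing import Callable, Dict

DIMENSIONS = ["overall quality", "outline coherence", "reference accuracy"]

PROMPT_TEMPLATE = """You are grading an automatically generated survey.
Dimension: {dimension}
Use the human-written survey below as a quality anchor (treat it as 5/5).

[Human-written reference survey]
{human_reference}

[Generated survey]
{generated_survey}

Return a single integer score from 1 to 5 for the generated survey."""


def judge_survey(
    generated_survey: str,
    human_reference: str,
    call_llm: Callable[[str], str],
) -> Dict[str, int]:
    """Score one generated survey on each dimension with an LLM judge."""
    scores = {}
    for dimension in DIMENSIONS:
        prompt = PROMPT_TEMPLATE.format(
            dimension=dimension,
            human_reference=human_reference,
            generated_survey=generated_survey,
        )
        reply = call_llm(prompt)
        # Assumes the judge replies with a bare integer; real systems would
        # parse more defensively or request structured output.
        scores[dimension] = int(reply.strip())
    return scores


if __name__ == "__main__":
    # Stub judge so the sketch runs without any API; swap in a real model call.
    fake_judge = lambda prompt: "4"
    print(judge_survey("...generated survey text...", "...human survey text...", fake_judge))
```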