Foundation models (FMs), particularly large language models (LLMs), have shown significant promise in various software engineering (SE) tasks, including code generation, debugging, and requirement refinement. Despite these advances, existing evaluation frameworks are insufficient for assessing model performance in the iterative, context-rich workflows characteristic of SE activities. To address this limitation, we introduce \emph{SWE-Arena}, an interactive platform designed to evaluate FMs on SE tasks. SWE-Arena provides a transparent, open-source leaderboard, supports multi-round conversational workflows, and enables end-to-end model comparisons. The platform introduces novel metrics, including a \emph{model consistency score}, which measures the consistency of model outputs through self-play matches, and a \emph{conversation efficiency index}, which evaluates model performance while accounting for the number of interaction rounds required to reach a conclusion. Moreover, SWE-Arena incorporates a new feature called \emph{RepoChat}, which automatically injects repository-related context (e.g., issues, commits, pull requests) into the conversation, further aligning evaluations with real-world development processes. This paper outlines the design and capabilities of SWE-Arena, emphasizing its potential to advance the evaluation and practical application of FMs in software engineering.