The rise of Large Language Models (LLMs) as coding agents promises to accelerate software development, but their impact on the reproducibility of generated code remains largely unexplored. This paper presents an empirical study investigating whether LLM-generated code can be executed successfully in a clean environment containing only operating-system packages and the dependencies the model itself specifies. We evaluate three state-of-the-art LLM coding agents (Claude Code, OpenAI Codex, and Gemini) across 300 projects generated from 100 standardized prompts in Python, JavaScript, and Java. We introduce a three-layer dependency framework (distinguishing claimed, working, and runtime dependencies) to quantify execution reproducibility. Our results show that only 68.3% of projects execute out of the box, with substantial variation across languages (Python 89.2%, Java 44.0%). We also find an average 13.5-fold expansion from declared to actual runtime dependencies, revealing a substantial layer of hidden dependencies.
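To make the three-layer framework concrete, the sketch below illustrates one way the outer layers could be operationalized for a Python project; it is a hypothetical illustration under stated assumptions, not the paper's actual tooling. It assumes a project directory containing a `requirements.txt` manifest and a `main.py` entry point: claimed dependencies are read from the manifest, while runtime dependencies are approximated by the top-level modules actually loaded when the entry point runs, recorded via `python -X importtime`. The intermediate "working" layer would correspond to the subset of claimed packages that install and import successfully in a clean environment.

```python
# Illustrative sketch only (assumed project layout: requirements.txt + main.py).
# Note: claimed names are PyPI distribution names, runtime names are import
# names; mapping between the two (e.g. scikit-learn vs. sklearn) is omitted.
import subprocess
import sys
from pathlib import Path


def claimed_dependencies(project_dir: Path) -> set[str]:
    """Layer 1: packages the model declares in requirements.txt (simplified parse)."""
    req = project_dir / "requirements.txt"
    if not req.exists():
        return set()
    names = set()
    for line in req.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # Drop version specifiers and extras, keeping only the package name.
            for sep in ("==", ">=", "<=", "~=", "["):
                line = line.split(sep)[0]
            names.add(line.strip())
    return names


def runtime_modules(project_dir: Path) -> set[str]:
    """Layer 3: top-level modules actually imported when main.py executes.

    Uses `python -X importtime`, which logs every import to stderr.
    """
    proc = subprocess.run(
        [sys.executable, "-X", "importtime", str(project_dir / "main.py")],
        capture_output=True, text=True, timeout=120,
    )
    modules = set()
    for line in proc.stderr.splitlines():
        if not line.startswith("import time:") or line.endswith("imported package"):
            continue  # skip program output and the header row
        # Last field is the (possibly dotted) module name; keep its top level.
        modules.add(line.rsplit("|", 1)[-1].strip().split(".")[0])
    return modules
```

Comparing the two sets for a project then gives a rough per-project view of hidden dependencies: runtime modules that are neither standard-library modules nor covered by any claimed package.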