LLMs promise to transform unit test generation from a manual burden into an automated solution. Yet, beyond metrics such as compilability or coverage, little is known about the quality of LLM-generated tests, particularly their susceptibility to test smells: design flaws that undermine readability and maintainability. This paper presents the first multi-benchmark, large-scale analysis of test smell diffusion in LLM-generated unit tests. We contrast LLM outputs with human-written suites (as the reference for real-world practices) and SBST-generated tests from EvoSuite (as the automated baseline), disentangling whether LLMs reproduce human-like flaws or introduce artifacts of synthetic generation. Our study draws on 20,505 class-level suites from four LLMs (GPT-3.5, GPT-4, Mistral 7B, Mixtral 8x7B), 972 method-level cases from TestBench, 14,469 EvoSuite tests, and 779,585 human-written tests from 34,635 open-source Java projects. Using two complementary detection tools (TsDetect and JNose), we analyze prevalence, co-occurrence, and correlations with software attributes and generation parameters. Results show that LLM-generated tests consistently manifest smells such as Assertion Roulette and Magic Number Test, with patterns strongly influenced by prompting strategy, context length, and model scale. Comparisons reveal overlaps with human-written tests, raising concerns about potential data leakage from training corpora, while EvoSuite exhibits distinct, generator-specific flaws. These findings highlight both the promise and the risks of LLM-based test generation, and call for the design of smell-aware generation frameworks, prompt engineering strategies, and enhanced detection tools to ensure maintainable, high-quality test code.
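For readers unfamiliar with the two smells named above, the following minimal JUnit 4 sketch (a hypothetical test class, not drawn from the studied corpora) illustrates both: unexplained numeric literals (Magic Number Test) and multiple assertions without failure messages (Assertion Roulette).

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example test exhibiting both smells discussed in the abstract.
public class StringPaddingTest {

    @Test
    public void testPadding() {
        // Magic Number Test: the literals 10 and 3 appear with no named constant
        // explaining their intent.
        String padded = String.format("%10s", "abc");

        // Assertion Roulette: several assertions without explanatory messages,
        // so a failure does not reveal which expectation was violated.
        assertEquals(10, padded.length());
        assertEquals(' ', padded.charAt(0));
        assertEquals(3, padded.trim().length());
    }
}
```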