Test suites tend to grow as software evolves, often making it infeasible to execute all test cases within the allocated testing budget, especially for large software systems. Therefore, test suite minimization (TSM) is employed to improve the efficiency of software testing by removing redundant test cases, thus reducing testing time and resources while maintaining the fault detection capability of the test suite. Most TSM approaches rely on code coverage (white-box) or model-based features, which are not always available to test engineers. Recent TSM approaches that rely only on test code (black-box), such as ATM and FAST-R, have been proposed. To address scalability, we propose LTM (Language model-based Test suite Minimization), a novel, scalable, black-box similarity-based TSM approach based on large language models (LLMs). To support similarity measurement, we investigated three pre-trained language models, CodeBERT, GraphCodeBERT, and UniXcoder, to extract embeddings of test code, on which we computed two similarity measures: Cosine Similarity and Euclidean Distance. Our goal is to find similarity measures that are not only computationally more efficient but also better at guiding a Genetic Algorithm (GA), thus reducing the overall search time. Experimental results, under a 50% minimization budget, showed that the best configuration of LTM (UniXcoder with Cosine Similarity) outperformed the two best configurations of ATM in three key facets: (a) achieving a greater saving rate in testing time (40.38% versus 38.06%, on average); (b) attaining a significantly higher fault detection rate (0.84 versus 0.81, on average); and, more importantly, (c) minimizing test suites much faster (26.73 minutes versus 72.75 minutes, on average), in terms of both preparation time (up to two orders of magnitude faster) and search time (one order of magnitude faster).
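To illustrate the embedding-based similarity measurement described above, the following sketch extracts test-code embeddings with a pre-trained model and compares two test cases using Cosine Similarity and Euclidean Distance. It assumes the HuggingFace checkpoint microsoft/unixcoder-base and uses mean-pooling over token embeddings as one plausible aggregation choice; it is not necessarily LTM's exact configuration.

```python
# Sketch: embed test code with a pre-trained model, then compute
# Cosine Similarity and Euclidean Distance between two test cases.
# Assumptions: the "microsoft/unixcoder-base" checkpoint and mean-pooling
# are illustrative choices, not necessarily LTM's exact setup.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/unixcoder-base"  # CodeBERT / GraphCodeBERT would work similarly
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(test_code: str) -> torch.Tensor:
    """Return a fixed-size embedding vector for one test case's source code."""
    inputs = tokenizer(test_code, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # mean-pool to shape: (dim,)

# Two hypothetical test cases for illustration only.
test_a = "def test_add(): assert add(2, 3) == 5"
test_b = "def test_sum(): assert add(1, 4) == 5"

e_a, e_b = embed(test_a), embed(test_b)
cosine = torch.nn.functional.cosine_similarity(e_a, e_b, dim=0).item()
euclidean = torch.dist(e_a, e_b, p=2).item()
print(f"cosine similarity: {cosine:.4f}, euclidean distance: {euclidean:.4f}")
```

In a similarity-based TSM setting, such pairwise scores would feed the fitness function of a search algorithm (e.g., a GA) that selects a subset of test cases meeting the minimization budget while keeping the retained tests as dissimilar as possible.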