Generating tests that can reveal performance issues in large and complex software systems within a reasonable amount of time is a challenging task. On one hand, there are numerous combinations of input data values to explore. On the other hand, we have a limited test budget for executing tests. What makes this task even more difficult is the lack of access to the source code and internal details of these systems. In this paper, we present an automated test generation method called ACTA for black-box performance testing. ACTA is based on active learning, which means that it does not require a large set of historical test data to learn about the performance characteristics of the system under test. Instead, it dynamically chooses the tests to execute using uncertainty sampling. ACTA relies on a conditional variant of generative adversarial networks, and facilitates specifying performance requirements in terms of conditions and generating tests that address those conditions. We have evaluated ACTA on a benchmark web application, and the experimental results indicate that this method is comparable with random testing and two other machine learning methods, i.e., PerfXRL and DN.
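The core idea of uncertainty sampling under a limited test budget can be illustrated with a minimal sketch. This is not ACTA's actual implementation; the candidate representation, the surrogate predictor, and all names here are illustrative assumptions. The selector simply prefers candidate tests whose predicted probability of revealing a performance issue is closest to 0.5, i.e., the tests the model is least certain about.

```python
# Minimal sketch of uncertainty sampling for black-box performance testing.
# All names and the toy predictor are illustrative assumptions, not ACTA's code.

def uncertainty_sample(candidates, predict_prob, budget):
    """Select up to `budget` candidate tests whose predicted probability
    of revealing a performance issue is closest to 0.5 (most uncertain)."""
    ranked = sorted(candidates, key=lambda c: abs(predict_prob(c) - 0.5))
    return ranked[:budget]

def toy_predictor(test_input):
    # Hypothetical surrogate model: assume larger payloads are more
    # likely to trigger a slowdown, capped at probability 1.0.
    return min(test_input["payload_kb"] / 1000.0, 1.0)

# Candidate pool of test inputs; only two executions fit the budget.
pool = [{"payload_kb": s} for s in (10, 250, 500, 750, 990)]
chosen = uncertainty_sample(pool, toy_predictor, budget=2)
print(chosen)  # the two inputs the surrogate is least certain about
```

In this toy run the 500 KB input (predicted probability 0.5) is selected first, followed by one of the 0.25/0.75 candidates; executing such maximally uncertain tests gives the learner the most information per test, which is what lets an active-learning approach work without a large historical dataset.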