Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many automated approaches rely mainly on analyzing system models or source code, or on extracting the usage patterns of the system during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload affect the performance of the system in the same way; a finely tuned workload could accomplish the test objective more efficiently. Model-free reinforcement learning is widely used in decision-making problems to find the optimal behavior for accomplishing an objective without relying on a model of the system. This paper proposes that if a test agent can learn the optimal policy (way) for generating a test workload that meets a test objective, then efficient test automation becomes possible without relying on system models or source code. We present RELOAD, a self-adaptive reinforcement learning-driven load testing agent that learns the optimal policy for test workload generation and efficiently generates an effective workload to meet the test objective. Once the agent has learned the optimal policy, it can reuse it in subsequent testing activities. Our experiments show that the proposed intelligent load testing agent accomplishes the test objective at lower test cost than common load testing procedures and achieves higher test efficiency.
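To make the idea concrete, the minimal sketch below illustrates how a model-free agent could learn a workload-generation policy with tabular Q-learning. Everything here is an illustrative assumption rather than the RELOAD implementation: the workload levels, the reward shape, and the simulated_response_time stand-in for measuring the system under test are all hypothetical.

```python
import random
from collections import defaultdict

# Illustrative sketch only: a tiny Q-learning loop for workload tuning.
# WORKLOAD_LEVELS, the reward shape, and simulated_response_time are
# assumptions for demonstration, not part of the RELOAD agent itself.

WORKLOAD_LEVELS = [10, 20, 40, 80]      # candidate request rates (actions)
RESPONSE_TIME_TARGET = 2.0              # test objective: reach this response time
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def simulated_response_time(rate):
    """Stand-in for measuring the system under test (noisy, load-dependent)."""
    return 0.02 * rate + random.uniform(-0.2, 0.2)

q_table = defaultdict(float)            # Q[(state, action)] -> expected return
state = 0                               # index of the current workload level

for episode in range(500):
    # Epsilon-greedy selection over candidate workload adjustments.
    if random.random() < EPSILON:
        action = random.randrange(len(WORKLOAD_LEVELS))
    else:
        action = max(range(len(WORKLOAD_LEVELS)), key=lambda a: q_table[(state, a)])

    rt = simulated_response_time(WORKLOAD_LEVELS[action])
    # Reward approaching the response-time objective while penalizing
    # unnecessarily heavy workloads (a proxy for test cost).
    reward = -abs(RESPONSE_TIME_TARGET - rt) - 0.001 * WORKLOAD_LEVELS[action]

    next_state = action
    best_next = max(q_table[(next_state, a)] for a in range(len(WORKLOAD_LEVELS)))
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
    state = next_state

best = max(range(len(WORKLOAD_LEVELS)), key=lambda a: q_table[(state, a)])
print("Learned workload level:", WORKLOAD_LEVELS[best])
```

Under these assumptions, the learned Q-values encode which workload level best satisfies the test objective at the lowest cost, and the resulting policy could be reused in later test sessions instead of being relearned from scratch.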