Context: Selenium is claimed to be the most popular software test automation tool. Past academic work has largely neglected testing tools in favor of more methodological topics. Objective: We investigated the performance of web-testing tools to provide empirical evidence supporting choices in software test tool selection and configuration. Method: We used a 4×5 factorial design to study 20 different configurations for testing a web store. We studied five programming language bindings (C#, Java, Python, and Ruby for Selenium, while Watir supports Ruby only) and four browsers (Google Chrome, Internet Explorer, Mozilla Firefox, and Opera). Performance was measured by execution time, memory usage, test script length, and test stability. Results: Considering all measures, the best configuration was Selenium with the Python binding on Chrome. Selenium with the Python binding was the best option for all browsers. The effect size of the difference between the slowest and the fastest configuration was very large (Cohen's d = 41.5, a 91% increase in execution time). Overall, Internet Explorer was the fastest browser but had the worst stability results. Conclusions: We recommend benchmarking tools before adopting them. The weighting of factors, e.g. how much test stability one is willing to sacrifice for faster performance, affects the decision.
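For readers unfamiliar with the factorial design and the effect-size measure mentioned above, the following Python sketch (not from the study; the configuration lists and execution-time samples are purely illustrative) enumerates a 4×5 grid of browser/binding configurations and computes Cohen's d as the difference of means divided by the pooled standard deviation.

```python
from itertools import product
from statistics import mean, stdev

# Illustrative enumeration of the 4x5 factorial design:
# 4 browsers x 5 language bindings (Watir is Ruby-only) = 20 configurations.
browsers = ["Chrome", "Internet Explorer", "Firefox", "Opera"]
bindings = [("Selenium", "C#"), ("Selenium", "Java"), ("Selenium", "Python"),
            ("Selenium", "Ruby"), ("Watir", "Ruby")]
configurations = list(product(browsers, bindings))
assert len(configurations) == 20

def cohens_d(sample_a, sample_b):
    """Cohen's d: difference of sample means divided by the pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    pooled_sd = (((na - 1) * stdev(sample_a) ** 2 + (nb - 1) * stdev(sample_b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / pooled_sd

# Made-up execution-time samples (seconds) for a slow and a fast configuration.
slow_config_times = [210.0, 212.5, 211.8, 213.1]
fast_config_times = [110.2, 110.9, 111.4, 110.5]
print(cohens_d(slow_config_times, fast_config_times))
```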