Test automation is common in software development; tests are often run repeatedly to identify regressions. If the number of test cases is large, one may select a subset and use only the most important test cases. Regression test selection (RTS) can be automated and enhanced with Artificial Intelligence (AI-RTS). This, however, could introduce ethical challenges. While such challenges are generally well studied for AI, there is a gap with respect to ethical AI-RTS. By exploring the literature and drawing on our experience of developing an industrial AI-RTS tool, we contribute to the literature by identifying three challenges (assigning responsibility, bias in decision-making, and lack of participation) and three approaches (explicability, supervision, and diversity). Additionally, we provide a checklist for ethical AI-RTS to help guide the decision-making of the stakeholders involved in the process.