Artificial Intelligence/Machine Learning (AI/ML) techniques have been widely used in software engineering to improve developer productivity, the quality of software systems, and decision-making. However, such AI/ML models for software engineering are often still impractical, not explainable, and not actionable. These concerns frequently hinder the adoption of AI/ML models in software engineering practice. In this article, we first highlight the need for explainable AI in software engineering. Then, we summarize three successful case studies showing how explainable AI techniques can address these challenges by making software defect prediction models more practical, explainable, and actionable.