The application of Artificial Intelligence (AI) methods, especially machine learning techniques, has grown in recent years. Classification algorithms have been successfully applied to problems such as requirement classification. Although these algorithms perform well, most of them cannot explain how they reach a decision. Explainable Artificial Intelligence (XAI) is a set of techniques that explain the predictions of machine learning models. In this work, the applicability of XAI to software requirement classification is studied: an explainable software requirement classifier based on the LIME algorithm is presented, and its explainability is evaluated on the PROMISE software requirement dataset. The results show that XAI can help analysts and requirement specifiers better understand why a specific requirement is classified as functional or non-functional; the keywords driving these decisions are identified and analyzed in detail. The effect of XAI on feature reduction is also examined, and the results show that the XAI model plays a positive role in feature analysis.