Explainable Artificial Intelligence (XAI) has recently gained a surge of interest, as many Artificial Intelligence (AI) practitioners and developers are compelled to rationalize how their AI-based systems work. Decades ago, most XAI systems were developed as knowledge-based or expert systems. These systems framed explanations as technical descriptions of their reasoning, with little regard for the user's cognitive capabilities. The emphasis of XAI research has since shifted toward a more pragmatic approach to explanation aimed at better user understanding. One broad area where cognitive science research can substantially influence XAI advancements is the assessment of user knowledge and feedback, both of which are essential for evaluating XAI systems. To this end, we propose a framework for generating and evaluating explanations on the basis of different cognitive levels of understanding. In this regard, we adopt Bloom's taxonomy, a widely accepted model for assessing a user's cognitive capability. We use counterfactual explanations as the explanation medium, combined with user feedback, to validate the user's level of understanding at each cognitive level and to improve the explanation generation methods accordingly.
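As an illustrative aside, and not the paper's actual implementation, the sketch below shows one way such a framework could be wired together: a naive perturbation-based counterfactual search over a toy scikit-learn classifier, with the resulting explanation phrased differently depending on a hypothetical Bloom's-taxonomy level supplied as user feedback. The helper names (find_counterfactual, explain_for_level, bloom_level) and the greedy search strategy are assumptions made purely for illustration.

```python
# Illustrative sketch only: a naive counterfactual search on a toy classifier,
# with the explanation text adapted to a hypothetical Bloom's-taxonomy level.
# Function names and the search strategy are assumptions, not the paper's method.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on the Iris data.
data = load_iris()
X, y = data.data, data.target
feature_names = data.feature_names
model = LogisticRegression(max_iter=1000).fit(X, y)

def find_counterfactual(x, model, step=0.1, max_iter=200, seed=0):
    """Greedy random search for a nearby input with a different predicted class."""
    rng = np.random.default_rng(seed)
    original_class = model.predict(x.reshape(1, -1))[0]
    candidate = x.copy()
    for _ in range(max_iter):
        trial = candidate + rng.normal(scale=step, size=x.shape)
        if model.predict(trial.reshape(1, -1))[0] != original_class:
            return trial
        # Keep the trial only if it moves probability away from the original class.
        if (model.predict_proba(trial.reshape(1, -1))[0, original_class]
                < model.predict_proba(candidate.reshape(1, -1))[0, original_class]):
            candidate = trial
    return None

def explain_for_level(x, cf, bloom_level):
    """Phrase the counterfactual differently per (hypothetical) Bloom's level."""
    changes = [
        f"{name}: {a:.2f} -> {b:.2f}"
        for name, a, b in zip(feature_names, x, cf)
        if abs(a - b) > 0.05
    ]
    if bloom_level == "remember":       # lowest level: recall the raw facts
        return "Changed features: " + "; ".join(changes)
    if bloom_level == "understand":     # restate the change in plain language
        return ("The prediction would change if these measurements differed: "
                + "; ".join(changes))
    # higher levels: prompt the user to analyse which change matters most
    return "Analyse which of these changes drives the decision: " + "; ".join(changes)

x = X[0]
cf = find_counterfactual(x, model)
if cf is not None:
    for level in ["remember", "understand", "analyse"]:
        print(level, "->", explain_for_level(x, cf, level))
```

In this toy setup, user feedback at each level would decide whether the next explanation is phrased at the same level or a higher one; the actual feedback loop and level-assessment procedure are what the proposed framework is meant to evaluate.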