Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret because they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing Framework and the RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition and implemented a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful, and were further enhanced by semantic cues, but saliency explanations were not. This work provides insights into providing and evaluating relatable, contrastive explainable AI for perception applications.
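To make the modular multi-task architecture mentioned above concrete, the following is a minimal sketch, assuming PyTorch: a shared audio encoder over log-mel spectrograms, an emotion-classification head, and a gradient-based contrastive saliency map that highlights input regions favoring the predicted emotion over a contrast emotion. All module names, shapes, the four-class emotion set, and the gradient-difference formulation are illustrative assumptions, not the authors' RexNet implementation.

```python
# Minimal sketch of a modular multi-task emotion recognizer with
# contrastive saliency. Assumes PyTorch; shapes and names are illustrative.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Shared audio encoder with an emotion-classification head."""
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(            # shared feature extractor
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_emotions)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encoder(spec))

def contrastive_saliency(model: nn.Module, spec: torch.Tensor,
                         pred: int, contrast: int) -> torch.Tensor:
    """Gradient of (logit[pred] - logit[contrast]) w.r.t. the input:
    shows which spectrogram regions support 'why pred, not contrast'."""
    spec = spec.clone().requires_grad_(True)
    logits = model(spec)
    (logits[0, pred] - logits[0, contrast]).backward()
    return spec.grad.abs().squeeze(0)            # saliency over the spectrogram

model = EmotionNet()
spec = torch.randn(1, 1, 64, 128)                # one log-mel spectrogram (assumed format)
pred = model(spec).argmax(dim=1).item()
saliency = contrastive_saliency(model, spec, pred=pred, contrast=(pred + 1) % 4)
```

The gradient-difference objective here is one common way to realize a contrastive ("why this emotion instead of that one") saliency explanation; the paper's actual Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues modules may be implemented differently.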