As machine learning approaches are increasingly used to augment human decision-making, eXplainable Artificial Intelligence (XAI) research has explored methods for communicating system behavior to humans. However, these approaches often fail to account for humans' emotional responses as they interact with explanations. Facial affect analysis, which examines human facial expressions of emotion, is one promising lens for understanding how users engage with explanations. Therefore, in this work, we aim to (1) identify which facial affect features are pronounced when people interact with XAI interfaces, and (2) develop a multitask feature embedding that links facial affect signals with participants' use of explanations. Our analyses and results show that the occurrence and values of action units AU1 (inner brow raiser) and AU4 (brow lowerer), as well as Arousal, are heightened when participants fail to use explanations effectively. This suggests that facial affect analysis should be incorporated into XAI to personalize explanations to individuals' interaction styles and to adapt explanations based on the difficulty of the task performed.
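The multitask embedding in aim (2) can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: it assumes per-frame facial features (AU occurrence/intensity values plus arousal) are projected through a shared embedding, from which one head regresses affect descriptors and a second head predicts whether the participant used the explanation. All dimensions and weight names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 video frames, 18 facial affect features
# (e.g., 17 AU values plus arousal), an 8-dimensional shared embedding.
n_frames, n_features, embed_dim = 32, 18, 8
X = rng.normal(size=(n_frames, n_features))  # per-frame facial affect features

W_shared = rng.normal(scale=0.1, size=(n_features, embed_dim))  # shared layer
W_affect = rng.normal(scale=0.1, size=(embed_dim, 3))  # head 1: affect summary
W_use = rng.normal(scale=0.1, size=(embed_dim, 1))     # head 2: explanation use

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared multitask embedding: both task heads read from the same space,
# so gradients from either task would shape one common representation.
Z = np.tanh(X @ W_shared)

affect_pred = Z @ W_affect                   # task 1: per-frame affect regression
use_prob = sigmoid(Z.mean(axis=0) @ W_use)   # task 2: clip-level use-of-explanation

print(Z.shape, affect_pred.shape, float(use_prob))
```

In a trained version, the two task losses would be summed and backpropagated through `W_shared`, which is what ties the facial affect signals to explanation-use behavior in a single representation.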