Existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demands for explainability. This research paradigm disproportionately ignores the larger group of non-technical end users, who have a much stronger demand for AI explanations across diverse explanation goals, such as making safer and better decisions and improving the outcomes predicted for them. The lack of explainability-focused functional support for end users may hinder the safe and accountable use of AI in high-stakes domains, such as healthcare, criminal justice, finance, and autonomous driving. Building upon prior human-factor analyses of end users' requirements for XAI, we identify and model four novel XAI technical problems that span the full spectrum from the design to the evaluation of XAI algorithms: edge-case-based reasoning, customizable counterfactual explanation, collapsible decision trees, and a verifiability metric to evaluate XAI utility. Based on these newly identified research problems, we also discuss open problems in the technical development of user-centered XAI to inspire future research. Our work bridges human-centered XAI with the technical XAI community, and calls for a new research paradigm on the technical development of user-centered XAI for the responsible use of AI in critical tasks.