The widespread adoption of algorithmic decision-making systems has made it necessary to interpret the reasoning behind their decisions. The majority of these systems are complex black-box models, and auxiliary models are often used to approximate and then explain their behavior. However, recent research suggests that such explanations are not easily accessible to lay users with no specific expertise in machine learning, which can lead to incorrect interpretations of the underlying model. In this paper, we show that a predictive and interactive model based on causality is inherently interpretable, does not require any auxiliary model, and allows both expert and non-expert users to understand the model comprehensively. To demonstrate our method, we developed Outcome Explorer, a causality-guided interactive interface, and evaluated it by conducting think-aloud sessions with three expert users and a user study with 18 non-expert users. All three expert users found our tool comprehensive in supporting their explanation needs, while the non-expert users were able to understand the inner workings of a model easily.
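To give a concrete sense of why a causality-based predictive model explains itself without an auxiliary model, the following is a minimal sketch. It is a hypothetical illustration only, not the Outcome Explorer implementation: the variable names, graph structure, and edge weights are invented assumptions, standing in for a linear structural causal model over a known DAG. Because each prediction is computed by propagating values along explicit cause-effect edges, the contribution of every cause to the outcome can be read directly off the model.

```python
# Hypothetical linear structural causal model (illustrative only).
# Each endogenous node's value is a weighted sum of its parents,
# so the model's reasoning is the graph itself: no surrogate
# explanation model is needed.

causal_graph = {                  # node -> {parent: edge weight}
    "education":  {},             # exogenous input
    "experience": {},             # exogenous input
    "skill":      {"education": 0.6, "experience": 0.4},
    "income":     {"skill": 1.2, "experience": 0.3},
}

def predict(inputs, graph):
    """Propagate input values through the causal DAG.

    The dict is listed in topological order (parents before
    children), so a single pass suffices. The returned dict
    exposes every intermediate value, making the path from
    each cause to the outcome directly inspectable.
    """
    values = dict(inputs)
    for node, parents in graph.items():
        if parents:  # endogenous node: weighted sum of parent values
            values[node] = sum(w * values[p] for p, w in parents.items())
    return values

# Interactive "what-if" use: nudge one input and re-run to see how
# the change flows through intermediate variables to the outcome.
print(predict({"education": 4, "experience": 2}, causal_graph))
print(predict({"education": 5, "experience": 2}, causal_graph))
```

Under these assumptions, comparing the two runs shows exactly which intermediate variables carry the effect of the changed input to the outcome, which is the kind of what-if reasoning an interactive causal interface can support for both expert and non-expert users.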