This paper investigates the prospects of using directive explanations to assist people in achieving recourse against machine learning decisions. Directive explanations list the specific actions an individual needs to take to achieve their desired outcome. If a machine learning model makes a decision that is detrimental to an individual (e.g., denying a loan application), then it should explain both why it made that decision and how the individual could obtain their desired outcome (if possible). At present, this is often done using counterfactual explanations, but such explanations generally do not tell individuals how to act. We assert that counterfactual explanations can be improved by explicitly providing people with actions they could take to achieve their desired goal. This paper makes two contributions. First, we present the results of an online study investigating people's perceptions of directive explanations. Second, we propose a conceptual model for generating such explanations. Our online study showed a significant preference for directive explanations ($p<0.001$). However, the participants' preferred explanation type was affected by multiple factors, such as individual preferences, social factors, and the feasibility of the directives. Our findings highlight the need for a human-centred and context-specific approach to creating directive explanations.