Decisions in organizations involve evaluating alternatives and choosing the one that best serves organizational goals. To the extent that the evaluation of alternatives can be formulated as a predictive task with appropriate metrics, machine learning algorithms are increasingly used to improve the efficiency of the process. Explanations facilitate communication between the algorithm and the human decision-maker, making it easier for the latter to interpret the former's predictions and act on them. However, because feature-based explanations carry the semantics of causal models, they induce leakage from the decision-maker's prior beliefs. Our findings from a field experiment demonstrate empirically how this leads to confirmation bias and a disparate impact on the decision-maker's confidence in the predictions. Such differences can lead to sub-optimal and biased decision outcomes.
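To make the notion of a feature-based explanation concrete, the following is a minimal illustrative sketch, not the setup from the field experiment: it assumes a linear (logistic regression) model and a simple coefficient-times-deviation attribution rule, and the loan-approval task, feature names, and data are hypothetical.

```python
# A minimal sketch of a feature-based explanation for a single prediction.
# Assumptions: a linear model and the attribution rule
#   attribution_i = w_i * (x_i - mean_i);
# the task, feature names, and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan-approval data: three numeric features per applicant.
feature_names = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.8]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one prediction: each feature's contribution relative to the
# dataset mean, signed so the decision-maker sees which features pushed
# the prediction up or down.
x = X[0]
baseline = X.mean(axis=0)
attributions = model.coef_[0] * (x - baseline)

print(f"Predicted approval probability: {model.predict_proba([x])[0, 1]:.2f}")
for name, a in zip(feature_names, attributions):
    print(f"  {name:>15}: {a:+.3f}")
```

An explanation of this form reads naturally as a causal claim ("income raised the score, debt_ratio lowered it"), which is the semantic channel through which a decision-maker's prior beliefs about each feature can leak into how much confidence they place in the prediction.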