Machine learning algorithms are increasingly used to assist human decision-making. When the goal of machine assistance is to improve the accuracy of human decisions, it might seem appealing to design ML algorithms that complement human knowledge: since neither the algorithm nor the human is perfectly accurate, one could expect their complementary expertise to lead to improved outcomes. In this study, we demonstrate that, in practice, decision aids that are not complementary but instead make errors similar to human ones may have benefits of their own. In a series of human-subject experiments with a total of 901 participants, we study how the similarity of human and machine errors influences human perceptions of, and interactions with, algorithmic decision aids. We find that (i) people perceive more similar decision aids as more useful, accurate, and predictable; (ii) people are more likely to take opposing advice from more similar decision aids; and (iii) decision aids that are less similar to humans have more opportunities to provide opposing advice, resulting in a higher influence on people's decisions overall.