In many applications of AI, the algorithm's output is framed as a suggestion to a human user. The user may ignore the advice or take it into consideration to modify their decision. With the increasing prevalence of such human-AI interactions, it is important to understand how users act (or do not act) upon AI advice, and whether users regard advice differently when they believe it comes from an "AI" versus another human. In this paper, we characterize how humans use AI suggestions relative to equivalent suggestions from a group of peer humans across several experimental settings. We find that participants' beliefs about human versus AI performance on a given task affect whether or not they heed the advice. When participants decide to use the advice, they do so similarly for human and AI suggestions. These results provide insights into factors that affect human-AI interactions.