Starting from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data, and even when they learn information about the algorithm that warrants distrust. We conducted online experiments in which subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information provided about the algorithm and studied its influence on trust. Our findings suggest that AI is overtrusted rather than distrusted. We propose digital literacy as a potential remedy to ensure the responsible use of AI.