It is curious that AI increasingly outperforms human decision makers, yet much of the public distrusts AI to make decisions affecting their lives. In this paper we explore a novel theory that may explain one reason for this. We propose that public distrust of AI is a moral consequence of designing systems that prioritize reducing the tangible costs of false positives over the less tangible costs of false negatives. We show that such systems, which we characterize as 'distrustful', are more likely to miscategorize trustworthy individuals, with cascading consequences for both those individuals and the overall human-AI trust relationship. Ultimately, we argue that public distrust of AI stems from well-founded concern about the possibility of being miscategorized. We propose that restoring public trust in AI will require that systems be designed to embody a stance of 'humble trust', whereby the moral costs of the misplaced distrust associated with false negatives are weighted appropriately during development and use.