Automated decision systems (ADS) have become ubiquitous in high-stakes domains. These systems typically rely on sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow affected individuals to fully comprehend their inner workings. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards an ADS in comparison to a scenario where a human, rather than an ADS, makes a high-stakes decision -- providing identical explanations for the decision in both cases. Surprisingly, we find that people perceive the ADS as fairer than the human decision-maker. Our analyses further suggest that AI literacy moderates these perceptions: people with higher AI literacy favor ADS more strongly over human decision-makers, whereas people with low AI literacy show no significant difference in their perceptions.