Algorithmic processes are increasingly employed for managerial decision making, especially after the tremendous success of Artificial Intelligence (AI). This paradigm shift is occurring because these sophisticated AI techniques promise to optimize the relevant performance metrics. However, this adoption is currently under scrutiny due to various concerns such as fairness, and the question of how the fairness of an AI algorithm affects users' trust is a legitimate one to pursue. In this regard, we aim to understand the relationship between induced algorithmic fairness and its perception by humans. In particular, we are interested in whether the two are positively correlated and reflect substantive fairness. Furthermore, we also study how induced algorithmic fairness affects user trust in algorithmic decision making. To this end, we conduct a user study that simulates candidate shortlisting with different levels of introduced (mathematically manipulated) fairness in a human resource recruitment setting. Our experimental results demonstrate that the level of introduced fairness is positively related to the human perception of fairness, and is simultaneously positively related to user trust in algorithmic decision making. Interestingly, we also find that users are more sensitive to higher levels of introduced fairness than to lower levels. Finally, we summarize the theoretical and practical implications of this research with a discussion of the perception of fairness.
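The abstract refers to "introduced (mathematically manipulated) fairness" without naming the metric or the manipulation procedure. As an illustration only, the sketch below shows one way such a manipulation could be implemented: varying a demographic-parity-style balance in a simulated shortlisting task. The metric choice, the target_parity parameter, and the helper functions are assumptions made for illustration, not the paper's actual method.

    # Hypothetical sketch: inducing a target level of demographic-parity
    # balance in a simulated candidate-shortlisting task. All names and the
    # metric choice are illustrative assumptions, not taken from the paper.
    import random

    def shortlist_with_parity(candidates, k, target_parity):
        """Select k candidates, mixing score-based picks with group-balanced picks.

        candidates: list of dicts with 'score' (float) and 'group' ('A' or 'B').
        target_parity: 0.0 = purely score-ranked, 1.0 = fully group-balanced.
        """
        ranked = sorted(candidates, key=lambda c: c['score'], reverse=True)
        n_balanced = round(k * target_parity)   # picks forced to alternate groups
        n_merit = k - n_balanced                # picks taken purely by score

        shortlist = ranked[:n_merit]
        remaining = ranked[n_merit:]
        groups = ['A', 'B']
        for i in range(n_balanced):
            wanted = groups[i % 2]              # alternate groups to balance the shortlist
            pick = next((c for c in remaining if c['group'] == wanted), remaining[0])
            shortlist.append(pick)
            remaining.remove(pick)
        return shortlist

    def parity_ratio(shortlist):
        """Ratio of selection counts between the two groups (1.0 = perfect parity)."""
        a = sum(1 for c in shortlist if c['group'] == 'A')
        b = len(shortlist) - a
        return min(a, b) / max(a, b) if max(a, b) else 1.0

    if __name__ == "__main__":
        random.seed(0)
        pool = [{'score': random.random(), 'group': random.choice('AB')} for _ in range(50)]
        for level in (0.0, 0.5, 1.0):           # three hypothetical induced fairness levels
            picked = shortlist_with_parity(pool, k=10, target_parity=level)
            print(level, round(parity_ratio(picked), 2))

Under these assumptions, each condition shown to participants would correspond to a different target_parity setting, so that the mathematical fairness of the shortlist varies in a controlled way while the candidate pool stays fixed.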