Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect by conducting a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers' financial decision-making. Our results show that disparate impact decreased consumers' trust in the system and made them less likely to use it. Moreover, we found that trust was affected to the same degree across consumer groups (i.e., advantaged and disadvantaged users), even though both groups recognized their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.