Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their recommendations converge on a set of central values, little is known about the values a more representative public would find important for the AI technologies they interact with and might be affected by. We conducted a survey examining how individuals perceive and prioritize responsible AI values across three groups: a representative sample of the US population (N=743), a sample of crowdworkers (N=755), and a sample of AI practitioners (N=175). Our results empirically confirm a common concern: AI practitioners' value priorities differ from those of the general public. Compared to the US-representative sample, AI practitioners appear to consider responsible AI values less important and emphasize a different set of values. In contrast, self-identified women and Black respondents found responsible AI values more important than other groups did. Surprisingly, liberal-leaning participants, rather than participants who reported experiences with discrimination, were more likely to prioritize fairness. Our findings highlight the importance of paying attention to who gets to define responsible AI.