As AI systems are integrated into high-stakes social domains, researchers now examine how to design and operate them in a safe and ethical manner. However, the criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this paper, we examine the vagueness in debates about the safety and ethical behavior of AI systems. We show that this vagueness cannot be resolved through mathematical formalism alone; instead, it requires deliberation about the politics of development as well as the context of deployment. Drawing from a new sociotechnical lexicon, we redefine vagueness in terms of distinct design challenges at key stages in AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) empowers developers by (1) identifying points of overlap between design decisions and major sociotechnical challenges, and (2) motivating the creation of stakeholder feedback channels so that safety issues can be exhaustively addressed. As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just a procedure by which it is ensured.