Within the current AI ethics discourse, there is a gap in empirical research on how AI practitioners understand ethics and socially organize to operationalize ethical concerns, particularly in the context of AI start-ups. This gap intensifies the risk of a disconnect between scholarly research, innovation, and application. The risk materializes acutely as mounting pressure to identify and mitigate the potential harms of AI systems creates an urgent need to assess and implement socio-technical innovations for fairness, accountability, and transparency. Building on social practice theory, we address this need with a framework that allows AI researchers, practitioners, and regulators to systematically analyze existing cultural understandings, histories, and social practices of ethical AI and to define appropriate strategies for effectively implementing socio-technical innovations. Our contributions are threefold: 1) we introduce a practice-based approach to understanding ethical AI; 2) we present empirical findings from our study on the operationalization of ethics in German AI start-ups to underline that AI ethics and social practices must be understood in their specific cultural and historical contexts; and 3) based on these empirical findings, we suggest that ethical AI practices can be broken down into principles, needs, narratives, materializations, and cultural genealogies, which together form a useful backdrop for considering socio-technical innovations.