In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on AI Ethics Principles. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of Actionable Principles for AI. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission's High-Level Expert Group on AI. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of Actionable Principles for AI. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and (3) mechanisms to support implementation and operationalizability.