This article offers several contributions to the interdisciplinary project of responsible research and innovation in data science and AI. First, it provides a critical analysis of current efforts to establish practical mechanisms for algorithmic assessment, which are used to operationalise normative principles such as sustainability, accountability, transparency, fairness, and explainability, in order to identify limitations and gaps in current approaches. Second, it provides an accessible introduction to the methodology of argument-based assurance and explores how it is currently being applied in the development of safety cases for autonomous and intelligent systems. Third, it generalises this method to incorporate wider ethical, social, and legal considerations, in turn establishing a novel version of argument-based assurance that we call 'ethical assurance'. Ethical assurance is presented as a structured means of unifying the myriad practical mechanisms that have been proposed, as it is built upon a process-based form of project governance that supports inclusive and participatory ethical deliberation while remaining grounded in social and technical realities. Finally, it sets an agenda for ethical assurance by detailing current challenges, open questions, and next steps, which serve as a springboard for building an active (and interdisciplinary) research programme as well as for contributing to ongoing discussions in policy and governance.