An assurance case presents a clear and defensible argument, supported by evidence, that a system will operate as intended in a particular context. Typically, an assurance case argues that a system will be acceptably safe in its intended context. One emerging proposal within the Trustworthy AI research community is to extend and apply this methodology to provide assurance that the use of an AI system or an autonomous system (AI/AS) will be acceptably ethical in a particular context. In this paper, we advance this proposal further by presenting a principles-based ethical assurance (PBEA) argument pattern for AI/AS. The PBEA argument pattern offers a framework for reasoning about the overall ethical acceptability of the use of a given AI/AS, and it could serve as an early prototype template for specific ethical assurance cases. The four core ethical principles that form the basis of the PBEA argument pattern are justice, beneficence, non-maleficence, and respect for personal autonomy. Throughout, we connect stages of the argument pattern to examples of AI/AS applications, which helps to show its initial plausibility.