Assurance cases are structured arguments, supported by evidence, that are often used to establish confidence that a software-intensive system, such as an aeroplane, will be acceptably safe in its intended context. One emerging proposition within the ethical AI community is to extend and apply the assurance case methodology to achieve confidence that AI-enabled and autonomous systems will be acceptably ethical when used within their intended contexts. This paper substantially develops the proposition and makes it concrete. We present a framework - an ethical assurance argument pattern - to structure systematic reasoning about the ethical acceptability of using a given AI/AS in a specific context. The framework is based on four core ethical principles: justice; beneficence; non-maleficence; and respect for human autonomy. To illustrate the initial plausibility of the proposed methodology, we show how the ethical assurance argument pattern might be instantiated in practice with the example of an autonomous vehicle taxi service.