The world is heading towards a state in which Artificial Intelligence (AI) based agents make most decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions that have ethical and societal implications. Hence, ethical behaviour is a critical characteristic of human-centric AI. A common observation in human-centric industries, such as the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when and how to break the rules set by their designers. In this paper, we examine the 'when', i.e., the conditions under which humans break rules for pro-social reasons. In the presented study, we introduce a 'vaccination strategy dilemma' in which participants must decide whether to distribute Covid-19 vaccines only to members of a high-risk group (follow the rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. Results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics can completely explain. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm.