We are moving towards a future where Artificial Intelligence (AI)-based agents make many decisions on behalf of humans. From healthcare decision making to social media moderation, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, such as the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is known as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a 'vaccination strategy dilemma' to human participants and analyses their responses. In this dilemma, participants must decide whether to distribute Covid-19 vaccines only to members of a high-risk group (following the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (breaking the rule), which might yield a greater overall benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics completely explains. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm. We hope that this will inform the design of future AI agents and their decision-making behaviour.