Language-capable robots hold unique persuasive power over humans, and can thus help regulate people's behavior and preserve a healthy moral ecosystem by rejecting unethical commands and calling out norm violations. However, miscalibrated norm-violation responses (in which the harshness of a response does not match the actual severity of the violation) may not only decrease the effectiveness of human-robot communication, but may also damage the rapport between humans and robots. Therefore, when robots respond to norm violations, it is crucial that they consider both the moral value of their response (how much positive moral influence the response could exert) and its social value (how much face threat the utterance might impose). In this paper, we present a simple (naive) mathematical model of proportionality that could explain how moral and social considerations should be balanced in multi-agent norm-violation response generation. Even more importantly, we use this model to start a discussion about the hidden complexity of modeling proportionality, and through this discussion we identify key research directions that must be explored in order to develop socially and morally competent language-capable robots.