Morality in dialogue systems has attracted increasing research attention recently. A moral dialogue system can better connect with users and enhance conversation engagement by earning users' trust. In this paper, we propose a framework, MoralDial, to train and evaluate moral dialogue systems. In our framework, we first explore the communication mechanisms of morality and decompose expressed morality into four sub-modules. These sub-modules indicate the roadmap for building a moral dialogue system. Based on that, we design a simple yet effective method: constructing moral discussions between simulated specific users and the dialogue system from Rules of Thumb (RoTs). The constructed discussions consist of expressing, explaining, and revising moral views in dialogue exchanges, which enables conversational models to learn morality in a natural manner. Furthermore, we propose a novel evaluation method within the framework. We evaluate multiple aspects of morality by judging the relation between dialogue responses and RoTs in discussions, with particular consideration of the multifaceted nature of morality. Automatic and manual experiments demonstrate that our framework is promising for training and evaluating moral dialogue systems.
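As a minimal illustrative sketch (not the authors' implementation), the snippet below shows one way a moral discussion could be assembled from a single RoT, with turns that express, explain, and revise a moral view; all class names, turn templates, and helper functions here are hypothetical.

```python
# Hypothetical sketch: composing a moral discussion grounded in a Rule of Thumb (RoT).
# The turn templates and data structures are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Turn:
    speaker: str    # "user" (simulated) or "system"
    utterance: str


@dataclass
class MoralDiscussion:
    rot: str                              # the RoT grounding this discussion
    turns: List[Turn] = field(default_factory=list)


def build_discussion(question: str, answer: str, rot: str) -> MoralDiscussion:
    """Compose a discussion in which the system expresses a moral view,
    explains it with the RoT, and revises it after user feedback."""
    d = MoralDiscussion(rot=rot)
    d.turns.append(Turn("user", question))                                   # moral question
    d.turns.append(Turn("system", answer))                                   # express a view
    d.turns.append(Turn("user", "Why do you think so?"))                     # probe for reasons
    d.turns.append(Turn("system", f"Because {rot.lower()}"))                 # explain via the RoT
    d.turns.append(Turn("user", f"But {rot.lower()} may not hold in every situation."))
    d.turns.append(Turn("system", "That is a fair point; let me revise my view accordingly."))  # revise
    return d


if __name__ == "__main__":
    demo = build_discussion(
        question="Is it okay to read my friend's diary without asking?",
        answer="No, you should respect your friend's privacy.",
        rot="It is wrong to violate someone's privacy.",
    )
    for t in demo.turns:
        print(f"{t.speaker}: {t.utterance}")
```

Such constructed discussions could then serve both as training dialogues and as evaluation contexts in which a response is judged against the grounding RoT.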