Persuasion modeling is a key building block for conversational agents. Existing work in this direction is limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasion-deductiongame.socialai-data.org.