Responding with multi-modal content has been recognized as an essential capability for an intelligent conversational agent. In this paper, we introduce the MMDialog dataset to better facilitate multi-modal conversation. MMDialog is composed of a curated set of 1.08 million real-world dialogues with 1.53 million unique images across 4,184 topics. MMDialog has two main and unique advantages. First, it is the largest multi-modal conversation dataset by number of dialogues, 88x larger than the previous largest. Second, it covers a massive range of topics that generalize to the open domain. To build an engaging dialogue system with this dataset, we propose and normalize two response-producing tasks based on retrieval and generative scenarios. In addition, we build two baselines for the above tasks with state-of-the-art techniques and report their experimental performance. We also propose a novel evaluation metric, MM-Relevance, to measure multi-modal responses. Our dataset and scripts are available at https://github.com/victorsungo/MMDialog.