We present a novel multi-modal chitchat dialogue dataset, TikTalk, aimed at facilitating research on intelligent chatbots. It consists of videos and the corresponding dialogues that users generate on video social applications. In contrast to existing multi-modal dialogue datasets, we construct dialogue corpora from video comment-reply pairs, which are closer to chitchat in real-world dialogue scenarios. Our dialogue context includes three modalities: text, vision, and audio. Compared with previous image-based dialogue datasets, the richer sources of context in TikTalk lead to greater diversity of conversations. TikTalk contains over 38K videos and 367K dialogues. Data analysis shows that responses in TikTalk correlate with various contexts and external knowledge, which poses a great challenge for the deep understanding of multi-modal information and the generation of responses. We evaluate several baselines with three types of automatic metrics and conduct case studies. Experimental results demonstrate that there is still large room for future improvement on TikTalk. Our dataset is available at \url{https://github.com/RUC-AIMind/TikTalk}.