Next-generation task-oriented dialog systems need to understand conversational contexts together with their perceived surroundings, to effectively help users in real-world multimodal environments. Existing task-oriented dialog datasets aimed at virtual assistance fall short: they do not situate the dialog in the user's multimodal context. To overcome this limitation, we present a new dataset for Situated and Interactive Multimodal Conversations, SIMMC 2.0, which includes 11K task-oriented user↔assistant dialogs (117K utterances) in the shopping domain, grounded in immersive and photo-realistic scenes. The dialogs are collected using a two-phase pipeline: (1) a novel multimodal dialog simulator generates simulated dialog flows, with an emphasis on the diversity and richness of interactions; (2) the generated utterances are manually paraphrased to collect diverse referring expressions. We provide an in-depth analysis of the collected dataset and describe in detail the four main benchmark tasks we propose. Our baseline model, powered by a state-of-the-art language model, shows promising results and highlights new challenges and directions for the community to study.