Most popular goal-oriented dialogue agents are capable of understanding the conversational context. However, with the surge of virtual assistants with screens, the next generation of agents is required to also understand the screen context in order to provide a proper interactive experience and to better understand users' goals. In this paper, we propose a novel multimodal conversational framework in which the dialogue agent's next action and its arguments are derived jointly, conditioned on both the conversational and the visual context. Specifically, we propose a new model that can reason over the visual context within a conversation and populate API arguments with visual entities given the user query. Our model can recognize visual features such as color and shape, as well as metadata-based features such as the price or star rating associated with a visual entity. Due to the lack of suitable multimodal conversational datasets for training our model, we also propose a novel multimodal dialog simulator to generate synthetic data, and we additionally collect realistic user data from MTurk to improve model robustness. The proposed model achieves a reasonable 85% accuracy without incurring high inference latency. We also demonstrate the proposed approach in a prototypical furniture-shopping experience for a multimodal virtual assistant.
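To make the joint conditioning concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture): a model that fuses a dialogue encoding with encodings of on-screen visual entities to predict the agent's next action and to score which visual entity should fill an API argument slot. All module names, dimensions, and the attention-based fusion are illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's implementation: jointly condition the
# next-action prediction on the conversational encoding and the visual
# entities currently on screen, and reuse the attention scores as
# argument-grounding scores over those entities.
import torch
import torch.nn as nn


class MultimodalActionPredictor(nn.Module):
    def __init__(self, text_dim=256, entity_dim=64, num_actions=10, hidden=128):
        super().__init__()
        # Project per-entity features (e.g. color/shape embeddings plus
        # metadata such as price or star rating) into a shared space.
        self.entity_proj = nn.Linear(entity_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # Next-action classifier conditioned on dialogue + pooled visual context.
        self.action_head = nn.Linear(2 * hidden, num_actions)

    def forward(self, dialogue_enc, entity_feats):
        # dialogue_enc: (batch, text_dim) encoding of the conversation so far
        # entity_feats: (batch, num_entities, entity_dim) on-screen entities
        q = self.text_proj(dialogue_enc)                    # (B, H)
        e = self.entity_proj(entity_feats)                  # (B, N, H)
        # Attention scores double as argument-filling scores: which visual
        # entity best matches the slot referenced by the user query.
        scores = torch.einsum("bh,bnh->bn", q, e)           # (B, N)
        attn = scores.softmax(dim=-1)
        visual_ctx = torch.einsum("bn,bnh->bh", attn, e)    # (B, H)
        action_logits = self.action_head(torch.cat([q, visual_ctx], dim=-1))
        return action_logits, scores


# Usage with random tensors standing in for upstream encoder outputs.
model = MultimodalActionPredictor()
dialogue = torch.randn(2, 256)
entities = torch.randn(2, 5, 64)
action_logits, entity_scores = model(dialogue, entities)
```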