Arguably, visual perception of the physical world is a key way for conversational agents to exhibit human-like intelligence. Image-grounded conversation has thus been proposed to address this challenge. Existing work focuses on multimodal dialog models that ground the conversation on a given image. In this paper, we take a step further and study image-grounded conversation in a fully open-ended setting where no paired dialogs and images are assumed to be available. Specifically, we present Maria, a neural conversational agent powered by visual world experiences retrieved from a large-scale image index. Maria consists of three flexible components: a text-to-image retriever, a visual concept detector, and a visual-knowledge-grounded response generator. The retriever retrieves an image correlated with the dialog from the image index, the visual concept detector extracts rich visual knowledge from that image, and the response generator then conditions on the extracted visual knowledge and the dialog context to generate the target response. Extensive experiments demonstrate that Maria outperforms previous state-of-the-art methods on both automatic metrics and human evaluation, and that it can generate informative responses exhibiting visual commonsense about the physical world.
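To make the three-stage pipeline concrete, here is a minimal Python sketch of the retrieve-detect-generate flow described above. All names are hypothetical illustrations, not the authors' implementation: a trivial token-overlap score stands in for the learned text-to-image retriever, placeholder tags stand in for a real object detector, and a format string stands in for the neural response decoder.

```python
# Hypothetical sketch of the Maria-style pipeline; all names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class RetrievedImage:
    image_id: str   # stands in for a captioned image in a large-scale index
    score: float    # retrieval relevance score

def retrieve_image(dialog_context: str, image_index: List[str]) -> RetrievedImage:
    """Text-to-image retriever: pick the index entry most correlated with
    the dialog context. Token overlap replaces the learned retriever here."""
    ctx = set(dialog_context.lower().split())
    def overlap(caption: str) -> float:
        return len(ctx & set(caption.lower().split())) / (len(ctx) or 1)
    best = max(image_index, key=overlap)
    return RetrievedImage(image_id=best, score=overlap(best))

def detect_concepts(image: RetrievedImage) -> List[str]:
    """Visual concept detector: extract visual knowledge (object/attribute
    tags) from the retrieved image. A real system would run a detector;
    here the caption tokens serve as placeholder concepts."""
    return image.image_id.lower().split()

def generate_response(dialog_context: str, concepts: List[str]) -> str:
    """Visual-knowledge-grounded generator: condition the response on both
    the dialog context and the extracted concepts."""
    return f"(response grounded on [{', '.join(concepts)}] given: {dialog_context})"

# Usage: the index entries stand in for images; the context drives retrieval.
index = ["dog running on grass", "pizza on a wooden table"]
ctx = "I had pizza for dinner last night"
img = retrieve_image(ctx, index)
print(generate_response(ctx, detect_concepts(img)))
```

Because the three stages communicate only through plain data (an image, a list of concepts, a string), each component can be swapped independently, which matches the "flexible components" framing in the abstract.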