This survey provides an overview of the evolution of visually grounded models of spoken language over the last 20 years. Such models are inspired by the observation that when children pick up a language, they rely on a wide range of indirect and noisy clues, crucially including signals from the visual modality co-occurring with spoken utterances. Several fields have made important contributions to this approach to modeling or mimicking the process of learning language: Machine Learning, Natural Language and Speech Processing, Computer Vision, and Cognitive Science. The current paper brings together these contributions in order to provide a useful introduction and overview for practitioners in all these areas. We discuss the central research questions addressed, the timeline of developments, and the datasets that enabled much of this work. We then summarize the main modeling architectures and offer an exhaustive overview of the evaluation metrics and analysis techniques.