In this paper, we present a deep-learning-based framework for audio-visual speech inpainting, i.e., the task of restoring the missing parts of an acoustic speech signal from reliable audio context and uncorrupted visual information. Recent work focuses solely on audio-only methods and generally aims at inpainting music signals, which exhibit a structure very different from that of speech. Instead, we inpaint speech signals with gaps ranging from 100 ms to 1600 ms to investigate the contribution that vision can provide for gaps of different durations. We also experiment with a multi-task learning approach in which a phone recognition task is learned together with speech inpainting. Results show that the performance of audio-only speech inpainting approaches degrades rapidly as gaps grow longer, while the proposed audio-visual approach is able to plausibly restore the missing information. In addition, we show that multi-task learning is effective, although the largest contribution to performance comes from vision.