This paper introduces a new multi-modal dataset for visual and audio-visual speech recognition. It includes face tracks from over 400 hours of TED and TEDx videos, along with the corresponding subtitles and word alignment boundaries. The dataset is substantially larger than other public datasets available for general research.