Large datasets, as required for deep learning of lip reading, do not exist for many languages. In this paper we present GLips (German Lips), a dataset consisting of 250,000 publicly available videos of the faces of speakers of the Hessian Parliament, processed for word-level lip reading using an automatic pipeline. The format is similar to that of the English-language LRW (Lip Reading in the Wild) dataset, with each video encoding one word of interest within a context of 1.16 seconds, which makes the two datasets compatible for studying transfer learning. By training a deep neural network, we investigate whether lip reading has language-independent features, so that datasets of different languages can be used to improve lip-reading models. We demonstrate learning from scratch and show that transfer learning from LRW to GLips and vice versa improves learning speed and performance, in particular on the validation set.
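As a minimal sketch of the cross-lingual transfer idea described above, assuming a PyTorch video classifier with a separable front-end and classification head: the LipReadingNet module, its layer sizes, the checkpoint name lrw_pretrained.pt, and the 500-class vocabularies are illustrative placeholders, not the paper's actual architecture or training setup.

```python
import torch
import torch.nn as nn

class LipReadingNet(nn.Module):
    """Placeholder word-level lip-reading model: a 3D-conv visual
    front-end followed by a linear word-classification head."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # x: (batch, 1, frames, height, width) grayscale mouth-region clips
        return self.classifier(self.frontend(x))

# 1) Model trained on the source-language dataset, e.g. LRW (500 word classes).
source_model = LipReadingNet(num_classes=500)
# source_model.load_state_dict(torch.load("lrw_pretrained.pt"))  # hypothetical checkpoint

# 2) Transfer: copy the visual front-end, attach a fresh head sized to the
#    target-language vocabulary (here 500 GLips word classes, as a placeholder).
target_model = LipReadingNet(num_classes=500)
target_model.frontend.load_state_dict(source_model.frontend.state_dict())
target_model.classifier = nn.Linear(32, 500)

# 3) Fine-tune on the target dataset; the pretrained front-end is the part
#    expected to carry language-independent lip-reading features.
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
```

The same procedure applies in the opposite direction (GLips to LRW); only the source checkpoint and the size of the fresh classification head change.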