Audio-visual (AV) lip biometrics is a promising authentication technique that leverages the complementary strengths of the audio and visual modalities in speech communication. Previous works have demonstrated the usefulness of AV lip biometrics. However, the lack of a sizeable AV database has hindered the exploration of deep-learning-based AV lip biometrics. To address this problem, we compile a moderate-sized database from existing public databases. We then build DeepLip, an AV lip biometrics system comprising a convolutional neural network (CNN) based video module, a time-delay neural network (TDNN) based audio module, and a multimodal fusion module. Our experiments show that DeepLip outperforms traditional speaker recognition models in context modeling and achieves a relative improvement of over 50% compared with our best single-modality baseline, with equal error rates of 0.75% and 1.11% on the two test datasets, respectively.
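The abstract names three components: a CNN video module, a TDNN audio module, and a multimodal fusion module. The paper's actual layer configurations are not given here, so the following is a minimal PyTorch sketch under assumed dimensions (embedding size, filter counts, and embedding-level concatenation fusion are all illustrative assumptions, not the authors' specification):

```python
# Hypothetical sketch of a DeepLip-style architecture; all sizes and the
# concatenation-based fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class VideoCNN(nn.Module):
    """CNN lip-video module: 3D convolution -> spatio-temporal pooling -> embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),  # pool over time and space
        )
        self.fc = nn.Linear(16 * 4 * 4, embed_dim)

    def forward(self, x):  # x: (batch, 1, frames, height, width)
        return self.fc(self.conv(x).flatten(1))

class AudioTDNN(nn.Module):
    """TDNN audio module: dilated 1D convolutions -> statistics pooling -> embedding."""
    def __init__(self, feat_dim=40, embed_dim=64):
        super().__init__()
        self.tdnn = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, dilation=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, dilation=2),
            nn.ReLU(),
        )
        self.fc = nn.Linear(128, embed_dim)  # mean + std statistics

    def forward(self, x):  # x: (batch, feat_dim, frames)
        h = self.tdnn(x)
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        return self.fc(stats)

class DeepLipFusion(nn.Module):
    """Embedding-level fusion: concatenate modality embeddings, then project."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.video = VideoCNN(embed_dim)
        self.audio = AudioTDNN(embed_dim=embed_dim)
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, video, audio):
        v = self.video(video)
        a = self.audio(audio)
        return self.fuse(torch.cat([v, a], dim=1))

model = DeepLipFusion()
vid = torch.randn(2, 1, 25, 32, 32)  # 2 clips, 25 frames, 32x32 lip crops
aud = torch.randn(2, 40, 100)        # 2 utterances, 40-dim features, 100 frames
emb = model(vid, aud)
print(emb.shape)  # torch.Size([2, 64])
```

The fused embedding could then be scored with cosine similarity against an enrolled speaker embedding; score-level fusion of per-modality verdicts is an equally plausible reading of "multimodal fusion module".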