With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component that facilitates driving and provides extra functionality. In-car smart assistants should be able to process general as well as car-related commands and perform the corresponding actions, which eases driving and improves safety. However, data scarcity in low-resource languages hinders the development of both research and applications. In this paper, we introduce a new dataset, Cantonese In-car Audio-Visual Speech Recognition (CI-AVSR), for in-car command recognition in the Cantonese language with both video and audio data. It consists of 4,984 samples (8.3 hours) of 200 in-car commands recorded by 30 native Cantonese speakers. Furthermore, we augment our dataset with common in-car background noises to simulate real environments, producing a dataset 10 times larger than the collected one. We provide detailed statistics of both the clean and the augmented versions of our dataset. Moreover, we implement two multimodal baselines to demonstrate the validity of CI-AVSR. Experimental results show that leveraging the visual signal improves the overall performance of the model. Although our best model achieves considerable quality on the clean test set, speech recognition on the noisy data remains inferior and an extremely challenging task for real in-car speech recognition systems. The dataset and code will be released at https://github.com/HLTCHKUST/CI-AVSR.