With the success of neural-network-based modeling in automatic speech recognition (ASR), many studies have investigated acoustic modeling and the learning of feature extractors directly from the raw waveform. Recently, one line of research has focused on unsupervised pre-training of feature extractors on audio-only data to improve downstream ASR performance. In this work, we investigate the usefulness of one of these front-end frameworks, namely wav2vec, for hybrid ASR systems in a setting without additional untranscribed data. We compare this framework both to the manually defined standard Gammatone feature set and to features extracted as part of the acoustic model of an ASR system trained in a supervised manner. We study the benefits of using the pre-trained feature extractor and explore how to additionally exploit an existing acoustic model trained with different features. Finally, we systematically examine combinations of the described features to further improve performance.
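The abstract does not prescribe a particular toolkit for the wav2vec front-end; purely as an illustration, the following minimal sketch shows how pre-trained wav2vec representations could be extracted with the fairseq library and used in place of hand-crafted features in a hybrid ASR pipeline. The checkpoint path and the dummy waveform are placeholders, not part of the described work.

```python
import torch
from fairseq.models.wav2vec import Wav2VecModel

# Load a pre-trained wav2vec checkpoint (path is a placeholder).
cp = torch.load("wav2vec_large.pt", map_location="cpu")
model = Wav2VecModel.build_model(cp["args"], task=None)
model.load_state_dict(cp["model"])
model.eval()

# Dummy 16 kHz mono waveform: 1 second of audio, shape (batch, samples).
wav = torch.randn(1, 16000)

with torch.no_grad():
    z = model.feature_extractor(wav)   # local encoder features
    c = model.feature_aggregator(z)    # context network output (~100 frames/s)

# c could then serve as front-end features for the hybrid acoustic model,
# analogous to the Gammatone features used as a baseline.
print(c.shape)  # (batch, feature_dim, frames)
```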