In recent years, standard hybrid DNN-HMM speech recognizers have been outperformed by end-to-end speech recognition systems. One very promising approach is the grapheme-based Wav2Vec 2.0 model, which combines self-supervised pretraining with transfer learning when fine-tuning the speech recognizer. Since it requires neither a pronunciation vocabulary nor a language model, the approach is well suited to tasks where obtaining such models is difficult or almost impossible. In this paper, we use the Wav2Vec speech recognizer for spoken term detection over a large set of spoken documents. The method employs a deep LSTM network that maps the recognized hypothesis and the searched term into a shared pronunciation embedding space in which term occurrences and their assigned scores are easily computed. The paper describes a bootstrapping approach that transfers the knowledge contained in the traditional pronunciation vocabulary of a DNN-HMM hybrid ASR into the context of the grapheme-based Wav2Vec model. The proposed method outperforms a previously published system based on the combination of a DNN-HMM hybrid ASR and a phoneme recognizer by a large margin on the MALACH data in both English and Czech.
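The abstract does not fix the encoder architecture or the scoring function, so the following is only a minimal sketch of the idea of a shared pronunciation embedding space: a deep (here bidirectional) LSTM encodes grapheme sequences, both the searched term and candidate windows of the recognized hypothesis, into unit-length vectors, and occurrences are scored by cosine similarity. The class and function names, layer sizes, and the choice of cosine scoring are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PronunciationEncoder(nn.Module):
    """Deep LSTM mapping a grapheme sequence to a fixed-size
    pronunciation embedding (hypothetical sketch; dimensions and the
    bidirectional choice are illustrative, not the paper's)."""
    def __init__(self, num_graphemes, emb_dim=64, hidden_dim=256,
                 out_dim=128, layers=3):
        super().__init__()
        self.embed = nn.Embedding(num_graphemes, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, out_dim)

    def forward(self, grapheme_ids):
        x = self.embed(grapheme_ids)               # (B, T, emb_dim)
        _, (h, _) = self.lstm(x)                   # h: (2*layers, B, hidden_dim)
        h_cat = torch.cat([h[-2], h[-1]], dim=-1)  # last layer, fwd + bwd states
        return F.normalize(self.proj(h_cat), dim=-1)  # unit-length embeddings

def term_scores(encoder, term_ids, window_ids):
    """Cosine similarity between the searched term and each candidate
    window of the recognized hypothesis; higher = likelier occurrence.
    Cosine scoring is an assumption for this sketch."""
    with torch.no_grad():
        e_term = encoder(term_ids)      # (1, out_dim)
        e_wins = encoder(window_ids)    # (N, out_dim)
    return (e_wins @ e_term.T).squeeze(-1)  # (N,) scores in [-1, 1]
```

In such a setup, detection reduces to thresholding these similarity scores over sliding windows of the grapheme hypothesis, which is what makes a shared embedding space attractive when no pronunciation vocabulary is available.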