Despite rapid progress in recent years, current speech recognition systems still require labeled training data, which limits this technology to a small fraction of the languages spoken around the globe. This paper describes wav2vec-U, short for wav2vec Unsupervised, a method to train speech recognition models without any labeled data. We leverage self-supervised speech representations to segment unlabeled audio and learn a mapping from these representations to phonemes via adversarial training. The right representations are key to the success of our method. Compared to the best previous unsupervised work, wav2vec-U reduces the phoneme error rate on the TIMIT benchmark from 26.1 to 11.3. On the larger English Librispeech benchmark, wav2vec-U achieves a word error rate of 5.9 on test-other, rivaling some of the best published systems trained on 960 hours of labeled data from only two years ago. We also experiment on nine other languages, including low-resource languages such as Kyrgyz, Swahili and Tatar.
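The adversarial training the abstract refers to pairs a generator, which maps self-supervised segment representations to phoneme sequences, against a discriminator trained to distinguish those sequences from phonemized unpaired text. The sketch below illustrates that objective in PyTorch; it is not the paper's implementation, and the dimensions (FEAT_DIM, N_PHONES), network shapes, and the train_step helper are illustrative assumptions.

```python
# Minimal sketch of adversarial phoneme mapping, assuming pooled segment
# representations and one-hot phonemized text are already available.
# All sizes and architectures here are hypothetical, not the paper's.
import torch
import torch.nn as nn

FEAT_DIM, N_PHONES = 512, 40  # assumed feature dim and phoneme inventory size

# Generator: one phoneme distribution per audio segment representation.
generator = nn.Sequential(nn.Linear(FEAT_DIM, N_PHONES), nn.Softmax(dim=-1))

# Discriminator: scores a phoneme sequence (real text vs. generator output).
discriminator = nn.Sequential(
    nn.Conv1d(N_PHONES, 256, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(256, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(segments, real_phones_onehot):
    """One adversarial update.

    segments:           (batch, seq, FEAT_DIM) segment representations
                        from a self-supervised speech model.
    real_phones_onehot: (batch, seq, N_PHONES) phonemized text that is
                        unpaired with the audio.
    """
    fake = generator(segments)  # (batch, seq, N_PHONES)

    # Discriminator step: push real text toward 1, generated output toward 0.
    d_real = discriminator(real_phones_onehot.transpose(1, 2))
    d_fake = discriminator(fake.detach().transpose(1, 2))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: produce sequences the discriminator scores as real.
    g_score = discriminator(fake.transpose(1, 2))
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

At convergence the generator's output distribution over phoneme sequences becomes hard to tell apart from real phonemized text, which is what lets the mapping be learned without any paired transcriptions.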