We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.
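As a minimal illustrative sketch of the masking portion of the policy, the following NumPy function zeroes out random blocks of frequency channels and time steps in a log-mel spectrogram; the time-warping step is omitted, the mask widths and mask counts shown are assumed example values rather than the paper's tuned policies, and the function name `spec_augment` is our own.

```python
import numpy as np


def spec_augment(spec, num_freq_masks=2, freq_mask_width=27,
                 num_time_masks=2, time_mask_width=100, rng=None):
    """Apply frequency and time masking to a log-mel spectrogram.

    spec: array of shape (time_steps, mel_channels).
    Masked regions are set to zero; time warping is not included here.
    """
    rng = rng or np.random.default_rng()
    out = spec.copy()
    num_t, num_f = out.shape

    # Frequency masking: zero out f consecutive mel channels, f ~ U[0, F].
    for _ in range(num_freq_masks):
        f = int(rng.integers(0, freq_mask_width + 1))
        f0 = int(rng.integers(0, max(1, num_f - f + 1)))
        out[:, f0:f0 + f] = 0.0

    # Time masking: zero out t consecutive time steps, t ~ U[0, T].
    for _ in range(num_time_masks):
        t = int(rng.integers(0, time_mask_width + 1))
        t0 = int(rng.integers(0, max(1, num_t - t + 1)))
        out[t0:t0 + t, :] = 0.0

    return out


# Example: augment a dummy 1000-frame, 80-channel log-mel spectrogram.
dummy = np.random.randn(1000, 80).astype(np.float32)
augmented = spec_augment(dummy)
```

Because the masks are applied to the feature matrix itself, this augmentation can run on the fly in the input pipeline with no change to the acoustic model.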