Self-supervised speech representation learning (speech SSL), exemplified by wav2vec 2.0, has demonstrated the benefit of scale in learning rich representations for Automatic Speech Recognition (ASR) with limited paired data. We investigate the existence of sparse subnetworks in pre-trained speech SSL models that achieve even better low-resource ASR results. However, directly applying widely adopted pruning methods such as the Lottery Ticket Hypothesis (LTH) is suboptimal in terms of the computational cost required. Moreover, we show that the discovered subnetworks yield minimal performance gain over the original dense network. We present Prune-Adjust-Re-Prune (PARP), which discovers and finetunes subnetworks for much better performance while requiring only a single downstream ASR finetuning run. PARP is inspired by our surprising observation that subnetworks pruned for pre-training tasks need merely a slight adjustment to achieve a sizeable performance boost in downstream ASR tasks. Extensive experiments on low-resource ASR verify (1) that sparse subnetworks exist in mono-lingual/multi-lingual pre-trained speech SSL, and (2) the computational advantage and performance gain of PARP over baseline pruning methods. In particular, on the 10min Librispeech split without LM decoding, PARP discovers subnetworks from wav2vec 2.0 that achieve an absolute 10.9%/12.6% WER decrease compared to the full model. We further demonstrate the effectiveness of PARP through cross-lingual pruning without any phone recognition degradation, the discovery of a multi-lingual subnetwork for 10 spoken languages in a single finetuning run, and its applicability to pre-trained BERT/XLNet for natural language tasks.
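To make the prune-adjust-re-prune idea concrete, the sketch below illustrates the loop in PyTorch under simplified assumptions: a generic `model` and `data_loader` stand in for the pre-trained wav2vec 2.0 encoder and its CTC finetuning data, an MSE loss stands in for the CTC objective, and names such as `target_sparsity` and `reprune_every` are illustrative, not from the paper's released code. The key point it demonstrates is that pruned weights remain trainable between re-pruning steps, so the initial task-agnostic mask only needs a slight adjustment.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep the largest-magnitude entries; zero out the bottom `sparsity` fraction."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

def parp_finetune(model, data_loader, target_sparsity=0.5,
                  reprune_every=50, steps=500, lr=1e-3):
    # 1) Prune: initial task-agnostic subnetwork from the pre-trained weights
    #    (magnitude pruning of weight matrices only, biases left dense).
    masks = {n: magnitude_mask(p.data, target_sparsity)
             for n, p in model.named_parameters() if p.dim() > 1}
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    step = 0
    while step < steps:
        for x, y in data_loader:
            # 2) Adjust: apply the current mask before the forward pass, but keep
            #    all weights trainable so wrongly pruned entries can recover.
            for n, p in model.named_parameters():
                if n in masks:
                    p.data.mul_(masks[n])
            loss = nn.functional.mse_loss(model(x), y)  # stands in for the CTC loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            # 3) Re-prune: periodically recompute the mask at the target sparsity.
            if step % reprune_every == 0:
                masks = {n: magnitude_mask(p.data, target_sparsity)
                         for n, p in model.named_parameters() if p.dim() > 1}
            if step >= steps:
                break
    # Apply the final mask once more so the returned model is at the target sparsity.
    for n, p in model.named_parameters():
        if n in masks:
            p.data.mul_(masks[n])
    return model, masks
```

Because only a single finetuning run is performed and the mask is revised on the fly, this avoids the repeated prune-and-retrain cycles that make LTH-style pruning costly.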