Transformer-based self-supervised models are trained as feature extractors and have empowered many downstream speech tasks to achieve state-of-the-art performance. However, both the training and inference of these models may incur prohibitively high computational cost and a large parameter budget. Although the Parameter Sharing Strategy (PSS) proposed in ALBERT paves the way for parameter reduction, the required computation remains the same. Interestingly, we found in experiments that the distributions of feature embeddings from different Transformer layers are similar when PSS is applied: a property we term Layer Consistency (LC) in this paper. Given this similarity of feature distributions, we assume that feature embeddings from different layers have similar representational power. In this work, Layer Consistency enables us to use Transformer-based models more efficiently: the number of Conformer layers in each training iteration can be uniformly sampled, and Shallow Layer Inference (SLI) can be applied to reduce the number of layers at the inference stage. In experiments, our models are trained on the LibriSpeech dataset and then evaluated on both phone classification and speech recognition tasks. We experimentally achieve 7.8X parameter reduction, 41.9% training speedup and 37.7% inference speedup while maintaining performance comparable to conventional BERT-like self-supervised methods.
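To make the interaction of these three ideas concrete, the following is a minimal sketch (not the authors' implementation) of how parameter sharing, uniform sampling of the layer count during training, and Shallow Layer Inference at test time could be combined. The class name `SharedDepthEncoder`, the argument names, and the use of a standard `nn.TransformerEncoderLayer` in place of the Conformer block described in the paper are all illustrative assumptions.

```python
# Minimal sketch: PSS (one shared block reused for every layer application),
# uniform depth sampling during training, and Shallow Layer Inference (SLI).
import random
import torch
import torch.nn as nn


class SharedDepthEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4, max_layers=12):
        super().__init__()
        # PSS: a single set of weights is reused at every depth,
        # so the parameter count is that of one block.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.max_layers = max_layers

    def forward(self, x, num_layers=None):
        # Training: sample the depth uniformly from 1..max_layers each iteration.
        # Inference (SLI): pass a small fixed num_layers to cut computation.
        if num_layers is None:
            num_layers = random.randint(1, self.max_layers)
        for _ in range(num_layers):
            x = self.shared_block(x)
        return x


# Usage: depth-sampled training forward pass vs. shallow inference.
model = SharedDepthEncoder()
feats = torch.randn(2, 100, 256)        # (batch, frames, feature dim)
out_train = model(feats)                # depth sampled uniformly per call
out_infer = model(feats, num_layers=3)  # shallow layer inference
```

Because the block weights are shared, reducing the number of layer applications at inference changes only the computation, not the stored parameters, which is why SLI can be applied without retraining.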