Self-supervised learning (SSL) models have reshaped our approach to speech, language, and vision. However, their huge size and the opaque relations between their layers and tasks result in slow inference and network overthinking, where predictions made from the last layer of large models are worse than those made from intermediate layers. Early exit (EE) strategies can address both issues by dynamically reducing the computation performed at inference time for certain samples. Although popular for classification tasks in vision and language, EE has seen less use for sequence-to-sequence speech recognition (ASR) tasks, where outputs from early layers are often degenerate. This challenge is further compounded when speech SSL models are applied to out-of-distribution (OOD) data. This paper first shows that SSL models do overthink in ASR. We then motivate further research in EE by computing an optimal bound on the performance-versus-speed trade-off. To approach this bound, we propose two new strategies for ASR: (1) we adapt the recently proposed patience strategy to ASR; and (2) we design a new EE strategy specific to ASR that outperforms all previously introduced strategies.
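To make the patience idea concrete, the following is a minimal sketch, not the paper's implementation, of a patience-style early-exit loop for a layered encoder with per-layer CTC heads. The names `layer_logits`, `ctc_greedy_decode`, and `patience` are illustrative assumptions, and the paper's new ASR-specific strategy is not shown here.

```python
# Minimal sketch (assumed setup, not the paper's code) of patience-based early exit
# for ASR: exit once `patience` consecutive layers agree on the greedy transcription.
import torch


def ctc_greedy_decode(logits: torch.Tensor, blank: int = 0) -> list[int]:
    """Collapse repeated symbols and drop blanks from the per-frame argmax path."""
    path = logits.argmax(dim=-1).tolist()
    out, prev = [], None
    for tok in path:
        if tok != prev and tok != blank:
            out.append(tok)
        prev = tok
    return out


def patience_early_exit(layer_logits: list[torch.Tensor],
                        patience: int = 2) -> tuple[list[int], int]:
    """Return (hypothesis, exit layer). Stops as soon as `patience` consecutive
    intermediate layers produce the same greedy transcription; otherwise falls
    through to the last layer."""
    prev_hyp, streak = None, 0
    for depth, logits in enumerate(layer_logits, start=1):
        hyp = ctc_greedy_decode(logits)
        streak = streak + 1 if hyp == prev_hyp else 1
        prev_hyp = hyp
        if streak >= patience:
            return hyp, depth            # early exit: remaining layers are skipped
    return prev_hyp, len(layer_logits)   # no agreement: use the final layer


if __name__ == "__main__":
    # Fake per-layer CTC logits for a 12-layer encoder: 50 frames, 32-symbol vocab.
    torch.manual_seed(0)
    fake = [torch.randn(50, 32) for _ in range(12)]
    hyp, exited_at = patience_early_exit(fake, patience=2)
    print(f"exited at layer {exited_at} with {len(hyp)} tokens")
```

In practice the exit decision would be computed layer by layer during the forward pass so that later layers are truly never evaluated; the list-based loop above is only for illustration.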