Advances in self-supervised learning have significantly reduced the amount of transcribed audio required for training. However, the majority of work in this area has focused on read speech. We explore limited supervision in the domain of conversational speech. While we assume the amount of in-domain data is limited, we augment the model with open-source read speech data. The XLS-R model has been shown to perform well with limited adaptation data and serves as a strong baseline. We use untranscribed data for self-supervised learning and semi-supervised training of an autoregressive encoder-decoder model. We demonstrate that by using the XLS-R model for pseudo-transcription, a much smaller autoregressive model can outperform a fine-tuned XLS-R model when transcribed in-domain data is limited, reducing WER by as much as 8% absolute.
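To make the pseudo-transcription step concrete, below is a minimal sketch of labeling untranscribed audio with a CTC-based XLS-R model via the HuggingFace transformers library. The checkpoint path is hypothetical (the abstract does not name one); in practice it would be an XLS-R model already fine-tuned with a CTC head, and the resulting pseudo-transcripts would then serve as targets for training the smaller autoregressive encoder-decoder model.

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical checkpoint: an XLS-R model fine-tuned with a CTC head
# on the limited transcribed in-domain data (not named in the paper).
MODEL_ID = "path/to/xls-r-ctc-finetuned"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def pseudo_transcribe(wav_path: str) -> str:
    """Produce a pseudo-transcript for one untranscribed utterance."""
    speech, sr = sf.read(wav_path)  # XLS-R expects 16 kHz audio
    inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # Greedy CTC decoding: argmax over the vocabulary at each frame,
    # then collapse repeats and remove blanks inside batch_decode.
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]

# Pseudo-labeled pairs (wav_path, pseudo_transcribe(wav_path)) can then be
# mixed with the limited gold transcripts for semi-supervised training.
```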