Automatic recognition of disordered and elderly speech remains a highly challenging task to date due to the difficulty of collecting such data in large quantities. This paper explores a series of approaches to integrate domain-adapted SSL pre-trained models into TDNN and Conformer ASR systems for dysarthric and elderly speech recognition: a) input feature fusion between standard acoustic frontends and domain-adapted wav2vec2.0 speech representations; b) frame-level joint decoding of TDNN systems separately trained using standard acoustic features alone and with additional wav2vec2.0 features; and c) multi-pass decoding in which the TDNN/Conformer system outputs are rescored using domain-adapted wav2vec2.0 models. In addition, domain-adapted wav2vec2.0 representations are utilized in acoustic-to-articulatory (A2A) inversion to construct multi-modal dysarthric and elderly speech recognition systems. Experiments conducted on the UASpeech dysarthric and DementiaBank Pitt elderly speech corpora suggest that TDNN and Conformer ASR systems integrating domain-adapted wav2vec2.0 models consistently outperform standalone wav2vec2.0 models by statistically significant WER reductions of 8.22% and 3.43% absolute (26.71% and 15.88% relative) on the two tasks respectively. The lowest published WERs of 22.56% (52.53% on very low intelligibility, 39.09% on unseen words) and 18.17% are obtained on the UASpeech test set of 16 dysarthric speakers and the DementiaBank Pitt test set respectively.
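To make approach (a) concrete, the sketch below shows one plausible way to fuse a standard acoustic frontend with wav2vec2.0 representations at the frame level. This is a minimal illustrative sketch, not the paper's implementation: the function name, dimensions, and the assumption that wav2vec2.0 frames (20 ms stride) are repeated to match a 10 ms filterbank frame rate are all ours; real systems would use a Kaldi or torchaudio frontend and a fine-tuned wav2vec2.0 encoder.

```python
# Hedged sketch of input feature fusion (approach a): concatenate each
# standard acoustic frame with the temporally aligned wav2vec2.0 frame.
# All names and dimensions here are illustrative assumptions.

def fuse_features(fbank, w2v):
    """Fuse per-frame fbank features (10 ms stride) with wav2vec2.0
    representations (20 ms stride) by simple repetition and concatenation.

    fbank: list of T frames, each a list of floats (e.g. 40-dim)
    w2v:   list of roughly T/2 frames, each a list of floats (e.g. 768-dim)
    Returns T fused frames of dimension len(fbank[0]) + len(w2v[0]).
    """
    fused = []
    for t, frame in enumerate(fbank):
        # wav2vec2.0 emits one frame per two fbank frames, so each SSL
        # frame is reused twice; clamp the index at the sequence end.
        s = min(t // 2, len(w2v) - 1)
        fused.append(frame + w2v[s])
    return fused
```

The fused frames would then feed the TDNN/Conformer input layer in place of the plain acoustic features.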