Dysarthria is a motor speech disorder often characterized by reduced speech intelligibility due to slow, uncoordinated control of the speech production muscles. Automatic Speech Recognition (ASR) systems may help dysarthric talkers communicate more effectively. Robust dysarthria-specific ASR requires sufficient training speech, which is not readily available. Recent advances in multi-speaker end-to-end Text-To-Speech (TTS) synthesis suggest the possibility of using synthesized speech for data augmentation. In this paper, we aim to improve multi-speaker end-to-end TTS systems to synthesize dysarthric speech for improved training of a dysarthria-specific DNN-HMM ASR. In the synthesized speech, we add a dysarthria severity level and a pause insertion mechanism to other control parameters such as pitch, energy, and duration. Results show that a DNN-HMM model trained on additional synthetic dysarthric speech achieves a WER improvement of 12.2% compared to the baseline, and that adding the severity level and pause insertion controls decreases WER by a further 6.5%, demonstrating the effectiveness of these parameters. Audio samples are available at