Several high-resource text-to-speech (TTS) systems currently produce natural, human-like speech. In contrast, low-resource languages, including Arabic, have very limited TTS systems due to the lack of resources. We propose a fully unsupervised method for building TTS, including automatic data selection and pre-training/fine-tuning strategies for TTS training, using broadcast news as a case study. We show how careful selection of data, even in smaller amounts, can help a TTS system generate more natural speech than a system trained on a larger dataset. We propose different approaches for: 1) the data: we applied automatic annotation using DNSMOS, automatic vowelization, and automatic speech recognition (ASR) to fix transcription errors; 2) the model: we used transfer learning from a high-resource language, fine-tuned the TTS model with one hour of broadcast recordings, and then used this model to guide the durations of a FastSpeech2-based Conformer model. Our objective evaluation shows a 3.9% character error rate (CER), while the ground truth has a 1.3% CER. As for the subjective evaluation, where 1 is bad and 5 is excellent, our FastSpeech2-based Conformer model achieved a mean opinion score (MOS) of 4.4 for intelligibility and 4.2 for naturalness; many annotators recognized the voice of the broadcaster, which demonstrates the effectiveness of our proposed unsupervised method.