With the number of smart devices increasing, the demand for on-device text-to-speech (TTS) is growing rapidly. In recent years, many prominent end-to-end TTS methods have been proposed and have greatly improved the quality of synthesized speech. However, to ensure speech quality, most TTS systems depend on large and complex neural network models, which are hard to deploy on-device. In this paper, a small-footprint, fast, and stable network for on-device TTS, named DeviceTTS, is proposed. DeviceTTS uses a duration predictor as a bridge between the encoder and decoder to avoid the word-skipping and word-repeating problems of Tacotron. Since model size is a key factor for on-device TTS, the Deep Feedforward Sequential Memory Network (DFSMN) is used as the basic component of DeviceTTS. Moreover, to speed up inference, a mix-resolution decoder is proposed to balance inference speed and speech quality. Experiments are conducted with the WORLD and LPCNet vocoders. With only 1.4 million model parameters and 0.099 GFLOPS, DeviceTTS achieves performance comparable to that of Tacotron and FastSpeech. To the best of our knowledge, DeviceTTS can meet the needs of most devices in practical applications.
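The duration predictor described above acts as a hard-alignment bridge: each encoder output is repeated according to its predicted duration, so the decoder receives a frame-level sequence and no attention mechanism can skip or repeat words. The sketch below illustrates this length-regulation idea under simplifying assumptions (pure-Python lists instead of tensors; `length_regulate` is an illustrative name, not the paper's code):

```python
# Illustrative length regulator: expand per-phoneme encoder outputs by
# their predicted durations (in decoder frames). This is a sketch of the
# general technique, not DeviceTTS's actual implementation.

def length_regulate(encoder_outputs, durations):
    """Repeat each encoder frame d times for its predicted duration d.

    encoder_outputs: list of per-phoneme feature vectors
    durations: list of non-negative ints, same length as encoder_outputs
    """
    expanded = []
    for frame, d in zip(encoder_outputs, durations):
        expanded.extend([frame] * d)  # hard alignment: no skipping/repeating
    return expanded

# Three phoneme vectors expanded to 2 + 3 + 1 = 6 decoder steps.
phonemes = [[0.1], [0.2], [0.3]]
print(length_regulate(phonemes, [2, 3, 1]))
# → [[0.1], [0.1], [0.2], [0.2], [0.2], [0.3]]
```

Because the expansion is deterministic given the durations, the decoder's output length is fixed in advance, which is what makes this scheme more stable than attention-based alignment.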