Neural text-to-speech (TTS) systems generally adopt either a cascaded architecture, with a separately optimized acoustic model and vocoder, or an end-to-end architecture that bridges the acoustic model and vocoder in joint training through continuous mel-spectrograms or self-extracted speech frames as intermediate representations. Both suffer from two limitations: 1) continuous acoustic frames are hard to predict from phonemes alone; additional acoustic information such as duration or pitch is needed to resolve the one-to-many mapping problem, which does not scale easily to large and noisy datasets; 2) diverse speech output is not straightforward with continuous speech features, so complex VAE- or flow-based models are often required. In this paper, we propose FoundationTTS, a new speech synthesis system that extracts discrete speech tokens with a neural audio codec and uses a large-language-model-based acoustic model to simultaneously optimize linguistic and acoustic tokens. Specifically, 1) we propose a hierarchical codec network based on vector-quantized auto-encoders with adversarial training (VQ-GAN), in which a fine-grained codec first extracts continuous frame-level speech representations and a coarse-grained codec reconstructs the continuous speech frames with fewer quantizers; 2) we jointly optimize speech tokens, linguistic tokens, and a speaker token with a large language model and autoregressively predict the discrete speech tokens. Experiments show that FoundationTTS achieves a MOS gain of +0.14 over the baseline system. On ASR customization tasks, our method achieves 7.09\% and 10.35\% WERR respectively over two strong customized ASR baselines.
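To make the acoustic-modeling idea concrete, below is a minimal sketch (not the authors' implementation) of a decoder-only Transformer language model over a shared flat vocabulary of linguistic, speaker, and discrete acoustic (codec) tokens, trained with next-token cross-entropy to autoregressively predict the codec tokens. All vocabulary sizes, dimensions, and the vocabulary layout are illustrative assumptions.

```python
# Sketch of an LLM-style acoustic model over discrete tokens (assumed sizes).
import torch
import torch.nn as nn

N_PHONE, N_SPEAKER, N_CODEC = 100, 10, 1024    # hypothetical vocabulary sizes
VOCAB = N_PHONE + N_SPEAKER + N_CODEC          # one flat token space
D_MODEL, N_HEAD, N_LAYER, MAX_LEN = 256, 4, 4, 512

class TokenTTSLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, 4 * D_MODEL,
                                           batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, N_LAYER)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):                 # tokens: (B, T) int64
        T = tokens.size(1)
        # Causal mask so each position attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        return self.head(self.backbone(x, mask=causal))   # (B, T, VOCAB)

# Training step: next-token cross-entropy over the whole sequence. At inference
# the prompt would be [speaker token; phoneme tokens], codec tokens would be
# sampled autoregressively, then decoded to a waveform by the codec decoder.
model = TokenTTSLM()
seq = torch.randint(0, VOCAB, (2, 64))         # toy batch of token ids
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                   seq[:, 1:].reshape(-1))
loss.backward()
```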