A text-to-speech (TTS) model typically factorizes speech attributes such as content, speaker and prosody into disentangled representations. Recent works aim to additionally model the acoustic conditions explicitly, in order to disentangle the primary speech factors, i.e. linguistic content, prosody and timbre, from any residual factors, such as recording conditions and background noise. This paper proposes unsupervised, interpretable and fine-grained noise and prosody modeling. We incorporate adversarial training, a representation bottleneck and utterance-to-frame modeling in order to learn frame-level noise representations. To the same end, we perform fine-grained prosody modeling via a Fully Hierarchical Variational AutoEncoder (FVAE), which additionally results in more expressive speech synthesis.