State-of-the-art Variational Auto-Encoders (VAEs) for learning disentangled latent representations give impressive results in discovering features like pitch, pause duration, and accent in speech data, leading to highly controllable text-to-speech (TTS) synthesis. However, these LSTM-based VAEs fail to learn latent clusters of speaker attributes when trained on either limited or noisy datasets. Further, different latent variables start encoding the same features, limiting the control and expressiveness during speech synthesis. To resolve these issues, we propose RTI-VAE (Reordered Transformer with Information reduction VAE) where we minimize the mutual information between different latent variables and devise a modified Transformer architecture with layer reordering to learn controllable latent representations in speech data. We show that RTI-VAE reduces the cluster overlap of speaker attributes by at least 30\% over LSTM-VAE and by at least 7\% over vanilla Transformer-VAE.
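To illustrate the quantity the abstract refers to, the sketch below estimates the mutual information between two groups of latent variables under a joint-Gaussian assumption. This is only an illustrative proxy, not the estimator or training objective used in RTI-VAE; the function name and the synthetic latent samples are assumptions for the example.

```python
import numpy as np

def gaussian_mutual_information(z1, z2):
    """Estimate I(z1; z2) from samples, assuming a joint Gaussian:
    I = 0.5 * (log det(S1) + log det(S2) - log det(S_joint)).
    Illustrative proxy only, not the RTI-VAE objective."""
    z = np.hstack([z1, z2])              # joint samples, shape (N, d1 + d2)
    d1 = z1.shape[1]
    s = np.cov(z, rowvar=False)          # joint covariance
    _, logdet1 = np.linalg.slogdet(s[:d1, :d1])
    _, logdet2 = np.linalg.slogdet(s[d1:, d1:])
    _, logdet = np.linalg.slogdet(s)
    return 0.5 * (logdet1 + logdet2 - logdet)

rng = np.random.default_rng(0)
# Independent latent groups: estimated MI is near zero.
a = rng.normal(size=(5000, 2))
b = rng.normal(size=(5000, 2))
print(gaussian_mutual_information(a, b))
# Correlated groups (one encodes part of the other's feature):
# estimated MI is clearly positive. This overlap is what an
# MI-minimization penalty would drive down during training.
c = a + 0.5 * rng.normal(size=(5000, 2))
print(gaussian_mutual_information(a, c))
```

When two latent variables start encoding the same speech feature, their mutual information is large; penalizing such a term during training pushes each latent variable toward a distinct, independently controllable attribute.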