Although current neural text-to-speech (TTS) models can generate high-quality speech, intensity-controllable emotional TTS remains a challenging task. Most existing methods require external optimization to compute intensity, leading to suboptimal results or degraded voice quality. In this paper, we propose EmoDiff, a diffusion-based TTS model in which emotion intensity can be manipulated by a proposed soft-label guidance technique derived from classifier guidance. Specifically, instead of being guided with a one-hot vector for the specified emotion, EmoDiff is guided with a soft label in which the values of the specified emotion and \textit{Neutral} are set to $\alpha$ and $1-\alpha$ respectively. Here $\alpha$ represents the emotion intensity and can take any value in $[0, 1]$. Our experiments show that EmoDiff can precisely control emotion intensity while maintaining high voice quality. Moreover, diverse speech with a specified emotion intensity can be generated by sampling in the reverse denoising process.
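As a minimal sketch of the idea (assuming an emotion classifier $p(e \mid x_t)$ evaluated on the noisy sample $x_t$ and the standard classifier-guidance formulation; the notation here is illustrative rather than the exact derivation), the one-hot guidance term $\nabla_{x_t} \log p(e_{\mathrm{spec}} \mid x_t)$ is replaced by its soft-label counterpart
\[
g(x_t) \;=\; \alpha \, \nabla_{x_t} \log p\!\left(e_{\mathrm{spec}} \mid x_t\right) \;+\; (1-\alpha)\, \nabla_{x_t} \log p\!\left(e_{\mathrm{Neutral}} \mid x_t\right),
\]
which is added to the score at each reverse denoising step, so that $\alpha = 1$ recovers ordinary one-hot guidance toward the specified emotion and $\alpha = 0$ guides toward \textit{Neutral}.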