Spontaneous speech has many affective and pragmatic functions that are interesting and challenging to model in TTS (text-to-speech). However, the presence of reduced articulation, fillers, repetitions, and other disfluencies means that text and acoustics are less well aligned than in read speech. This is problematic for attention-based TTS. We propose a TTS architecture that is particularly suited for rapidly learning to speak from irregular and small datasets, while also reproducing the diversity of expressive phenomena present in spontaneous speech. Specifically, we modify an existing neural HMM-based TTS system, which is capable of stable, monotonic alignments for spontaneous speech, by adding utterance-level prosody control, so that the system can represent the wide range of natural variability in a spontaneous speech corpus. We objectively evaluate control accuracy and perform a subjective listening test comparing against a system without prosody control. To exemplify the power of combining mid-level prosody control and ecologically valid data for reproducing intricate spontaneous speech phenomena, we evaluate the system's capability of synthesizing two types of creaky phonation. Audio samples are available at https://hfkml.github.io/pc_nhmm_tts/
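The abstract does not spell out the conditioning mechanism. Below is a minimal sketch, assuming a PyTorch encoder-decoder TTS setup, of how utterance-level prosody values might be injected into the phone encodings before a neural-HMM decoder. The class name, the choice of prosody features (mean log-F0, speaking rate, spectral tilt), and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: utterance-level prosody conditioning for an encoder-decoder TTS model.
# Feature set and fusion strategy are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ProsodyConditionedEncoder(nn.Module):
    def __init__(self, phone_dim: int = 256, prosody_dim: int = 3, emb_dim: int = 32):
        super().__init__()
        # Project a small vector of utterance-level prosody statistics
        # (e.g. mean log-F0, speaking rate, spectral tilt) to an embedding.
        self.prosody_proj = nn.Sequential(
            nn.Linear(prosody_dim, emb_dim),
            nn.Tanh(),
        )
        # Fuse the broadcast prosody embedding with the phone encodings.
        self.fuse = nn.Linear(phone_dim + emb_dim, phone_dim)

    def forward(self, phone_enc: torch.Tensor, prosody: torch.Tensor) -> torch.Tensor:
        # phone_enc: (batch, num_phones, phone_dim) from the text encoder
        # prosody:   (batch, prosody_dim) per-utterance control values
        emb = self.prosody_proj(prosody)                        # (batch, emb_dim)
        emb = emb.unsqueeze(1).expand(-1, phone_enc.size(1), -1)
        return self.fuse(torch.cat([phone_enc, emb], dim=-1))   # (batch, num_phones, phone_dim)
```

At synthesis time, the per-utterance prosody vector could either be predicted from text or set manually, which is what would enable the kind of utterance-level control described above.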