Sequence-to-Sequence Text-to-Speech architectures that directly generate low-level acoustic features from phonetic sequences are known to produce natural and expressive speech when provided with adequate amounts of training data. Such systems can learn a desired speaking style from one seen speaker and transfer it to another (in multi-speaker, multi-style settings), which is highly desirable for building scalable and customizable Human-Computer Interaction systems. In this work we explore one-to-many style transfer from a dedicated single-speaker conversational corpus containing style nuances and interjections. We elaborate on the corpus design and examine the feasibility of such style transfer when assisted by Voice-Conversion-based data augmentation. In a set of subjective listening experiments, this approach achieved high-fidelity style transfer with no quality degradation. However, a certain voice persona shift was observed, indicating that further improvements in voice conversion are required.
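As a rough illustration of the Voice-Conversion-based data augmentation described above, the sketch below converts each utterance of the single-speaker conversational corpus into every target speaker's voice, producing synthetic conversational data that could be pooled with each target speaker's natural recordings when training the multi-speaker, multi-style TTS model. All names here (Utterance, augment_with_vc, convert) are hypothetical placeholders for exposition, not the authors' actual implementation.

    from dataclasses import dataclass, replace
    from typing import Callable, List

    @dataclass
    class Utterance:
        phonemes: str     # phonetic sequence, including interjection tokens
        audio_path: str   # path to the source waveform
        speaker_id: str   # speaker label used by the multi-speaker TTS model
        style: str        # e.g. "conversational" or "neutral"

    def augment_with_vc(
        conversational_corpus: List[Utterance],
        target_speakers: List[str],
        convert: Callable[[str, str], str],  # (audio_path, target_speaker) -> converted path
    ) -> List[Utterance]:
        """Convert the single-speaker conversational corpus into each target
        speaker's voice, yielding synthetic in-style data for those speakers."""
        augmented = []
        for utt in conversational_corpus:
            for spk in target_speakers:
                # The phonetic content and style label are kept; only the
                # waveform and speaker identity change under voice conversion.
                augmented.append(
                    replace(
                        utt,
                        audio_path=convert(utt.audio_path, spk),
                        speaker_id=spk,
                    )
                )
        return augmented

Under this reading, the TTS training set would be the union of each speaker's natural recordings and the VC-augmented conversational utterances; the observed persona shift would then stem from residual source-speaker characteristics left in the converted waveforms.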