Some recent models for Text-to-Speech synthesis aim to transfer the prosody of a reference utterance to the generated target synthetic speech. This is done by conditioning speech generation on a learned embedding of the reference utterance. During training, the reference utterance is identical to the target utterance. Yet, during synthesis, these models are often used to transfer prosody from a reference that differs in text or speaker from the utterance being synthesized. To address this inconsistency, we propose to use a different, but prosodically-related, utterance as the reference during training as well. We believe this should encourage the model to learn to transfer only those characteristics that the reference and target have in common. If prosody transfer methods do indeed transfer prosody, they should be trainable in the way we propose. However, results show that a model trained under these conditions performs significantly worse than one trained using the target utterance as a reference. To explain this, we hypothesize that prosody transfer models do not learn a transferable representation of prosody, but rather an utterance-level representation that is highly dependent on both the reference speaker and reference text.
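As a rough illustration of the training-time change proposed above, the Python sketch below contrasts the standard setup, where the reference utterance is the target itself, with sampling a different but prosodically-related utterance as the reference. The `Utterance` record and the `prosody_group` label are hypothetical stand-ins for whatever pairing criterion a real corpus would provide (e.g. parallel recordings of the same material); they are not part of the original description.

```python
import random
from dataclasses import dataclass
from typing import List


@dataclass
class Utterance:
    speaker: str
    text: str
    audio_path: str
    prosody_group: str  # hypothetical label grouping prosodically-related recordings


def sample_reference(target: Utterance,
                     corpus: List[Utterance],
                     related_reference_training: bool) -> Utterance:
    """Pick the reference utterance used to condition synthesis of `target`.

    With related_reference_training=False this mirrors the standard setup,
    where the reference and target are identical. With it set to True, it
    instead samples a different utterance sharing the target's prosody_group,
    so the model can only rely on characteristics the reference and target
    have in common.
    """
    if not related_reference_training:
        return target
    candidates = [u for u in corpus
                  if u.prosody_group == target.prosody_group
                  and u.audio_path != target.audio_path]
    # Fall back to the target itself if no related utterance exists.
    return random.choice(candidates) if candidates else target
```

At training time, the embedding network would then encode the sampled reference rather than the target before conditioning the decoder; everything else in the model is unchanged under this sketch.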