We present EdiTTS, an off-the-shelf speech editing methodology based on score-based generative modeling for text-to-speech synthesis. EdiTTS allows for targeted, granular editing of audio, both in terms of content and pitch, without the need for any additional training, task-specific optimization, or architectural modifications to the score-based model backbone. Specifically, we apply coarse yet deliberate perturbations in the Gaussian prior space to induce desired behavior from the diffusion model, while applying masks and softening kernels to ensure that iterative edits are applied only to the target region. Through listening tests and speech-to-text back transcription, we show that EdiTTS outperforms existing baselines and produces robust samples that satisfy user-imposed requirements.
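The masking-and-blending idea described above can be sketched minimally: a binary edit mask is smoothed with a softening kernel, and each iterative edit is blended so that changes apply only inside the target region while the rest of the sample is preserved. This is an illustrative NumPy sketch under assumed names (`soften_mask`, `masked_edit_step` are hypothetical), not the authors' implementation.

```python
import numpy as np

def soften_mask(mask, kernel_size=5):
    """Smooth a binary edit mask with a moving-average (softening)
    kernel so edits blend gradually at region boundaries."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.clip(np.convolve(mask, kernel, mode="same"), 0.0, 1.0)

def masked_edit_step(x_orig, x_edit, soft_mask):
    """Blend one iterative edit: keep the original trajectory outside
    the mask, apply the edited trajectory inside it."""
    return soft_mask * x_edit + (1.0 - soft_mask) * x_orig

# Toy example: edit only frames 40-59 of a 100-frame latent sequence.
mask = np.zeros(100)
mask[40:60] = 1.0
soft = soften_mask(mask)

rng = np.random.default_rng(0)
x_orig = rng.standard_normal(100)   # stands in for the unedited sample
x_edit = x_orig + 2.0               # stands in for the perturbed sample
blended = masked_edit_step(x_orig, x_edit, soft)
```

In a full diffusion pipeline this blending would be applied at every reverse step, so the edited region converges to the perturbed trajectory while the surrounding audio stays untouched.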