Given a piece of speech and its transcript text, text-based speech editing aims to generate speech that can be seamlessly inserted into the given speech by editing the transcript. Existing methods adopt a two-stage approach: synthesize the input text with a generic text-to-speech (TTS) engine, and then transform the result to the desired voice using voice conversion (VC). A major drawback of this framework is that VC is a challenging task that usually requires a moderate amount of parallel training data to work satisfactorily. In this paper, we propose a one-stage, context-aware framework that generates natural and coherent target speech without any training data from the target speaker. In particular, we perform accurate zero-shot duration prediction for the inserted text. The predicted duration is used to regulate both the text embedding and the speech embedding. Then, based on the aligned cross-modality input, we directly generate the mel-spectrogram of the edited speech with a transformer-based decoder. Subjective listening tests show that, despite the lack of training data for the target speaker, our method achieves satisfactory results and outperforms a recent zero-shot TTS engine by a large margin.
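To make the described pipeline concrete, the following is a minimal, purely illustrative PyTorch sketch of the two core steps named above: length-regulating the text embedding with the predicted durations, and decoding the aligned cross-modal input into a mel-spectrogram with a transformer decoder. All module names, dimensions, and hyperparameters here are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of duration-based length regulation followed by a
# transformer decoder producing a mel-spectrogram. Shapes and names are assumed.
import torch
import torch.nn as nn


def length_regulate(embeddings, durations):
    """Repeat each text embedding by its predicted duration (in frames).

    embeddings: (seq_len, dim) tensor of text (e.g. phone) embeddings
    durations:  (seq_len,) integer tensor of predicted frame counts
    Returns a (sum(durations), dim) tensor aligned to the mel time axis.
    """
    return torch.repeat_interleave(embeddings, durations, dim=0)


class EditDecoder(nn.Module):
    """Transformer decoder mapping the duration-aligned text embedding,
    conditioned on the surrounding speech embedding, to mel frames."""

    def __init__(self, dim=256, n_mels=80, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.to_mel = nn.Linear(dim, n_mels)

    def forward(self, aligned_text, speech_context):
        # aligned_text:   (batch, T_text_frames, dim) length-regulated text embedding
        # speech_context: (batch, T_speech_frames, dim) embedding of the unedited speech
        hidden = self.decoder(tgt=aligned_text, memory=speech_context)
        return self.to_mel(hidden)  # (batch, T_text_frames, n_mels)


if __name__ == "__main__":
    dim = 256
    text_emb = torch.randn(5, dim)             # 5 embeddings for the inserted text
    durations = torch.tensor([3, 7, 4, 6, 2])  # zero-shot duration predictions (frames)
    aligned = length_regulate(text_emb, durations).unsqueeze(0)
    context = torch.randn(1, 120, dim)         # embedding of the surrounding speech
    mel = EditDecoder(dim=dim)(aligned, context)
    print(mel.shape)                           # torch.Size([1, 22, 80])
```

In this sketch the duration predictor itself is omitted; its output is represented by the `durations` tensor, and the surrounding speech embedding serves as the decoder's memory so that the generated segment stays coherent with its context.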