There are many deterministic mathematical operations (e.g. compression, clipping, downsampling) that degrade speech quality considerably. In this paper we introduce a neural network architecture, based on a modification of the DiffWave model, that aims to restore the original speech signal. DiffWave, a recently published diffusion-based vocoder, has shown state-of-the-art synthesized speech quality and relatively short waveform generation times, with only a small number of parameters. We replace the mel-spectrum upsampler in DiffWave with a deep CNN upsampler, which is trained to alter the degraded speech mel-spectrum to match that of the original speech. The model is trained using the original speech waveform, but conditioned on the degraded speech mel-spectrum. After training, only the degraded mel-spectrum is used as input, and the model generates an estimate of the original speech. Our model improves speech quality (with the original DiffWave model as baseline) in several different experiments, including restoring speech degraded by LPC-10 compression, AMR-NB compression, and signal clipping. Compared to the original DiffWave architecture, our scheme achieves better performance on several objective perceptual metrics and in subjective comparisons. Improvements over the baseline are further amplified in an out-of-corpus evaluation setting.
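As a rough illustration of the conditioning path described above, the sketch below shows a deep CNN upsampler of the kind the abstract outlines: a convolutional stack that refines the degraded mel-spectrum, followed by transposed convolutions that upsample mel frames to waveform resolution as in DiffWave's conditioner. The layer counts, channel width, activation slopes, and 256x hop factor are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DeepCNNUpsampler(nn.Module):
    """Hypothetical sketch of the deep CNN upsampler: maps a degraded-speech
    mel-spectrogram to a waveform-rate conditioning signal, replacing
    DiffWave's original mel upsampler. Hyperparameters are assumptions."""

    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # 1-D convolutions over the time axis, intended to push the
        # degraded mel-spectrum toward the clean one (assumed design).
        self.refine = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.4),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.4),
            nn.Conv1d(hidden, n_mels, kernel_size=3, padding=1),
        )
        # Two 16x transposed convolutions (256x total) that bring mel
        # frames up to sample rate, mirroring DiffWave's conditioner shape.
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(1, 1, kernel_size=(3, 32),
                               stride=(1, 16), padding=(1, 8)),
            nn.LeakyReLU(0.4),
            nn.ConvTranspose2d(1, 1, kernel_size=(3, 32),
                               stride=(1, 16), padding=(1, 8)),
            nn.LeakyReLU(0.4),
        )

    def forward(self, degraded_mel: torch.Tensor) -> torch.Tensor:
        # degraded_mel: (batch, n_mels, frames)
        x = self.refine(degraded_mel)                 # refined mel estimate
        x = self.upsample(x.unsqueeze(1)).squeeze(1)  # (batch, n_mels, frames * 256)
        return x

if __name__ == "__main__":
    up = DeepCNNUpsampler()
    cond = up(torch.randn(2, 80, 100))
    print(cond.shape)  # torch.Size([2, 80, 25600])
```

In this reading, the diffusion network itself is left unchanged; only the conditioner is swapped, so the denoising steps receive a conditioning signal estimated from the degraded input rather than from the clean mel-spectrum.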