This paper introduces a novel speech enhancement (SE) approach based on a denoising diffusion probabilistic model (DDPM), termed Guided Diffusion for Speech Enhancement (GDiffuSE). In contrast to conventional methods that directly map noisy speech to clean speech, our method employs a lightweight helper model to estimate the noise distribution, which is then incorporated into the diffusion denoising process via a guidance mechanism. This design improves robustness by enabling seamless adaptation to unseen noise types and by allowing large-scale DDPMs originally trained for speech generation to be leveraged for SE. We evaluate our approach on noisy signals constructed by adding noise samples from the BBC sound effects database to LibriSpeech utterances, and show consistent improvements over state-of-the-art baselines under mismatched noise conditions. Audio examples are available on our project webpage.
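As a rough illustration of how such a guidance mechanism typically operates (a sketch, not the authors' exact formulation), the helper model's log-likelihood gradient can perturb the mean of each reverse diffusion step of a pretrained DDPM; the symbols \(\mu_\theta\), \(\Sigma_\theta\), \(p_\phi\), \(y\), and the guidance scale \(s\) are assumed notation, not taken from the paper:

\[
x_{t-1} \sim \mathcal{N}\!\left(\mu_\theta(x_t, t) + s\,\Sigma_\theta(x_t, t)\,\nabla_{x_t}\log p_\phi(y \mid x_t),\ \Sigma_\theta(x_t, t)\right),
\]

where \(\mu_\theta\) and \(\Sigma_\theta\) would come from the large-scale speech-generation DDPM, and \(p_\phi(y \mid x_t)\) would be the lightweight helper's estimate of how likely the noisy observation \(y\) is given the current denoised candidate \(x_t\).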