Neural audio codecs (NACs) provide compact latent speech representations in the form of sequences of continuous vectors or discrete tokens. In this work, we investigate how these two types of speech representations compare when used as training targets for supervised speech enhancement. We consider both autoregressive and non-autoregressive speech enhancement models based on the Conformer architecture, as well as a simple baseline in which the NAC encoder is directly fine-tuned for speech enhancement. Our experiments reveal three key findings: predicting continuous latent representations consistently outperforms discrete token prediction; autoregressive models achieve higher quality but at the expense of intelligibility and efficiency, making non-autoregressive models more attractive in practice; and encoder fine-tuning yields the strongest enhancement metrics overall, though at the cost of degraded codec reconstruction. The code and audio samples are available online.
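To make the non-autoregressive, continuous-target setup concrete, below is a minimal PyTorch sketch, not the paper's implementation: a Conformer regresses clean continuous latents from noisy ones produced by a frozen NAC encoder. The `StandInNACEncoder`, `LATENT_DIM`, `HOP`, the Conformer hyperparameters, and the L1 loss are all illustrative assumptions; only the Conformer backbone and the continuous-latent prediction target come from the abstract (a real setup would use a pretrained codec encoder such as EnCodec in place of the stand-in).

```python
# Minimal sketch (assumptions, not the paper's exact setup) of
# non-autoregressive speech enhancement in a NAC latent space.
import torch
import torch.nn as nn
import torchaudio

LATENT_DIM = 128  # assumed codec latent dimensionality
HOP = 320         # assumed codec downsampling factor (16 kHz -> 50 Hz frames)

class StandInNACEncoder(nn.Module):
    """Hypothetical placeholder for a frozen, pretrained NAC encoder."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, LATENT_DIM, kernel_size=HOP, stride=HOP)

    def forward(self, wav):  # wav: (B, samples)
        return self.conv(wav.unsqueeze(1)).transpose(1, 2)  # (B, T, LATENT_DIM)

class NAREnhancer(nn.Module):
    """Non-autoregressive Conformer mapping noisy latents to clean latents."""
    def __init__(self):
        super().__init__()
        self.conformer = torchaudio.models.Conformer(
            input_dim=LATENT_DIM, num_heads=4, ffn_dim=512,
            num_layers=4, depthwise_conv_kernel_size=31)
        self.head = nn.Linear(LATENT_DIM, LATENT_DIM)

    def forward(self, latents, lengths):  # latents: (B, T, LATENT_DIM)
        out, _ = self.conformer(latents, lengths)
        return self.head(out)

# The codec encoder stays frozen: it supplies both inputs and targets.
encoder = StandInNACEncoder().eval()
for p in encoder.parameters():
    p.requires_grad_(False)
model = NAREnhancer()

noisy = torch.randn(2, 16000)  # 1 s of noisy audio at 16 kHz (dummy data)
clean = torch.randn(2, 16000)  # paired clean reference (dummy data)
with torch.no_grad():
    z_noisy, z_clean = encoder(noisy), encoder(clean)
lengths = torch.full((2,), z_noisy.size(1))

# Supervised regression on continuous latents (L1 is one plausible choice).
loss = nn.functional.l1_loss(model(z_noisy, lengths), z_clean)
loss.backward()
```

At inference, the predicted latents would be passed to the codec's decoder to synthesize the enhanced waveform; the single forward pass is what makes the non-autoregressive variant efficient relative to token-by-token autoregressive decoding.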