Speech foundation models have significantly advanced various speech-related tasks by providing exceptional representation capabilities. However, their high-dimensional output features often create a mismatch with downstream task models, which typically require lower-dimensional inputs. A common solution is to apply a dimensionality reduction (DR) layer, but this approach increases parameter overhead and computational cost, and risks losing valuable information. To address these issues, we propose Nested Res2Net (Nes2Net), a lightweight back-end architecture designed to process high-dimensional features directly, without DR layers. The nested structure enhances multi-scale feature extraction, improves feature interaction, and preserves high-dimensional information. We first validate Nes2Net on CtrSVDD, a singing voice deepfake detection dataset, and report a 22% performance improvement and an 87% reduction in back-end computational cost over the state-of-the-art baseline. Additionally, extensive testing on four diverse datasets (ASVspoof 2021, ASVspoof 5, PartialSpoof, and In-the-Wild), covering fully spoofed speech, adversarial attacks, partial spoofing, and real-world scenarios, consistently highlights Nes2Net's superior robustness and generalization capabilities. The code package and pre-trained models are available at https://github.com/Liu-Tianchi/Nes2Net.
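To make the core idea concrete, below is a minimal PyTorch sketch of a nested Res2Net-style block that operates directly on high-dimensional front-end features without a DR layer. The class names (`Res2Conv1d`, `Nes2NetBlockSketch`), the 1024-dimensional input, and the scale factors are illustrative assumptions, not the exact Nes2Net configuration; see the released code at the URL above for the authors' implementation.

```python
# Illustrative sketch only (assumed dimensions and layer choices), not the
# authors' exact Nes2Net design. It shows the nesting idea: a Res2Net-style
# channel split whose per-group operator is itself a Res2Net-style split,
# applied to high-dimensional features with no dimensionality-reduction layer.
import torch
import torch.nn as nn


class Res2Conv1d(nn.Module):
    """Res2Net-style multi-scale 1-D convolution over channel groups."""
    def __init__(self, channels: int, scale: int = 4, kernel_size: int = 3):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        width = channels // scale
        # One conv per group except the first, which is passed through unchanged.
        self.convs = nn.ModuleList(
            nn.Conv1d(width, width, kernel_size, padding=kernel_size // 2)
            for _ in range(scale - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T)
        groups = torch.chunk(x, self.scale, dim=1)
        out, prev = [groups[0]], groups[0]
        for conv, g in zip(self.convs, groups[1:]):
            prev = conv(g + prev)  # hierarchical residual between adjacent groups
            out.append(prev)
        return torch.cat(out, dim=1)


class Nes2NetBlockSketch(nn.Module):
    """Nested variant: each outer group is processed by an inner Res2Conv1d."""
    def __init__(self, channels: int = 1024, outer_scale: int = 4, inner_scale: int = 4):
        super().__init__()
        assert channels % outer_scale == 0
        self.outer_scale = outer_scale
        width = channels // outer_scale
        self.inner = nn.ModuleList(
            Res2Conv1d(width, inner_scale) for _ in range(outer_scale - 1)
        )
        self.norm = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, T), C stays high-dimensional
        groups = torch.chunk(x, self.outer_scale, dim=1)
        out, prev = [groups[0]], groups[0]
        for inner, g in zip(self.inner, groups[1:]):
            prev = inner(g + prev)
            out.append(prev)
        # Residual connection; channel dimension is never reduced.
        return x + self.act(self.norm(torch.cat(out, dim=1)))


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 150)  # e.g. SSL front-end output: (batch, feature dim, frames)
    print(Nes2NetBlockSketch(1024)(feats).shape)  # torch.Size([2, 1024, 150])
```

Because the block keeps the full feature dimension and only adds small group-wise convolutions, its parameter count stays low relative to a DR-plus-back-end pipeline, which is consistent with the back-end cost reduction reported above.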