Speech super-resolution (SR) reconstructs high-fidelity wideband speech from low-resolution inputs, a task that requires reconciling global harmonic coherence with local transient sharpness. While diffusion-based generative models yield impressive fidelity, their practical deployment is often stymied by prohibitive computational demands. Conversely, efficient time-domain architectures lack the explicit frequency representations essential for capturing long-range spectral dependencies and ensuring precise harmonic alignment. We introduce STSR, a unified end-to-end framework formulated in the MDCT domain to circumvent these limitations. STSR employs a Spectral-Contextual Attention mechanism that harnesses hierarchical windowing to adaptively aggregate non-local spectral context, enabling consistent harmonic reconstruction at sampling rates up to 48 kHz. Concurrently, a sparse-aware regularization strategy mitigates the suppression of transient components inherent in compressed spectral representations. STSR consistently outperforms state-of-the-art baselines in both perceptual fidelity and zero-shot generalization, providing a robust, real-time paradigm for high-quality speech restoration.