The advent of deep neural networks (DNNs) has significantly improved the performance of monaural speech enhancement (SE). Most such methods attempt to capture the structural features of speech implicitly through distribution approximation. However, existing methods remain susceptible to degraded speech and residual noise. This letter takes the Information Bottleneck as an anchor for rethinking SE. By defining the incremental convergence of mutual information between speech characteristics, we show that the acoustic characteristics of speech are crucial for alleviating these issues: introducing them explicitly brings the optimization closer to its optimal information-theoretic upper bound. Drawing on the chain rule of entropy, we further propose a framework that reconstructs the information composition of the optimization objective, integrating and refining this underlying characteristic without loss of generality. Visualizations are consistent with the information-theoretic analysis. Experimental results show that, with only 1.18 M additional parameters, the refined CRN yields substantial gains over a number of advanced methods. The source code is available at https://github.com/caoruitju/RUI_SE.
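For context, the note below recalls, in generic notation, the standard Information Bottleneck objective and the chain rule of entropy that the abstract refers to; the symbols X (noisy input), T (learned representation), Y (clean target), and the trade-off weight beta are placeholders assumed here for illustration and do not reproduce the letter's specific decomposition.

% Standard Information Bottleneck objective (generic notation; X, T, Y, and
% beta are illustrative placeholders, not the letter's variables): compress
% the input X into a representation T while preserving information about Y.
\begin{equation}
  \min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
\end{equation}

% Chain rule of entropy and the resulting expressions for mutual information,
% the standard identities invoked when reorganizing an information-theoretic
% objective.
\begin{align}
  H(X, Y) &= H(X) + H(Y \mid X), \\
  I(X;Y)  &= H(X) - H(X \mid Y) = H(Y) - H(Y \mid X).
\end{align}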