Consumer-grade music recordings, such as those captured by mobile devices, typically contain distortions in the form of background noise, reverb, and microphone-induced equalization (EQ). This paper presents a deep learning approach to enhancing low-quality music recordings by combining (i) an image-to-image translation model that manipulates audio in its mel-spectrogram representation and (ii) a music vocoding model that maps synthetically generated mel-spectrograms to perceptually realistic waveforms. We find that this approach to music enhancement outperforms both baselines that use classical methods for mel-spectrogram inversion and an end-to-end approach that directly maps noisy waveforms to clean waveforms. Additionally, by evaluating the proposed method with a listening test, we analyze the reliability of common audio enhancement evaluation metrics when applied in the music domain.
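The two-stage design described above can be sketched as a simple pipeline: spectrogram-domain enhancement followed by vocoding. The function names, shapes, and hop length below are illustrative placeholders, not the paper's actual models or API:

```python
import numpy as np

def enhance_mel(noisy_mel: np.ndarray) -> np.ndarray:
    """Stand-in for the image-to-image translation model that maps a
    noisy mel-spectrogram to an enhanced one (identity placeholder)."""
    return noisy_mel

def vocode(mel: np.ndarray, hop_length: int = 256) -> np.ndarray:
    """Stand-in for the neural vocoder that maps a mel-spectrogram to a
    waveform (placeholder emitting silence of the matching length)."""
    n_frames = mel.shape[1]
    return np.zeros(n_frames * hop_length, dtype=np.float32)

def enhance(noisy_mel: np.ndarray) -> np.ndarray:
    """Full pipeline: enhance in the mel-spectrogram domain, then vocode."""
    return vocode(enhance_mel(noisy_mel))

# Example input: an 80-band mel-spectrogram with 100 frames
# (80 bands and a hop of 256 are common defaults, assumed here).
mel = np.random.rand(80, 100).astype(np.float32)
wav = enhance(mel)
print(wav.shape)  # (25600,) = 100 frames * 256 samples per hop
```

The key design point this sketch reflects is that the enhancement model never touches raw audio: all manipulation happens on the mel-spectrogram "image", and the vocoder alone is responsible for producing a perceptually realistic waveform.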