Most singer identification methods operate in the frequency domain, which can lead to information loss during the spectral transformation. In this paper, we instead propose an end-to-end architecture that addresses the problem in the waveform domain. An encoder based on a Multi-scale Dilated Convolutional Neural Network (MDCNN) is introduced to generate a wave embedding from the raw audio signal. Specifically, dilated convolution layers are used to enlarge the receptive field, with the aim of extracting song-level features. Furthermore, skip connections in the backbone network integrate the multi-resolution acoustic features learned by the stack of convolution layers. The resulting wave embedding is then passed to the subsequent networks for singer identification. In experiments, the proposed method achieves competitive performance on the Artist20 benchmark dataset, improving significantly on related work.
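To make the encoder design concrete, the following is a minimal sketch, assuming PyTorch, of a multi-scale dilated convolutional encoder over raw waveforms. The class name MDCNNEncoder, the channel width, kernel sizes, and dilation rates are all illustrative assumptions; the paper does not specify these hyperparameters.

```python
import torch
import torch.nn as nn


class MDCNNEncoder(nn.Module):
    """Sketch of a multi-scale dilated CNN encoder for raw audio.

    Hyperparameters here are assumptions for illustration, not the
    paper's configuration.
    """

    def __init__(self, channels=64, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        # Initial 1-D convolution lifts the mono waveform to `channels` features.
        self.input_conv = nn.Conv1d(1, channels, kernel_size=7, padding=3)
        # Stack of dilated convolutions: each increase in the dilation rate
        # enlarges the receptive field without extra parameters, so deeper
        # layers see progressively longer (song-level) context.
        self.dilated_convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        )
        self.activation = nn.ReLU()

    def forward(self, wav):
        # wav: (batch, 1, num_samples) raw audio.
        x = self.activation(self.input_conv(wav))
        # Skip connections collect the output of every dilated layer,
        # integrating the multi-resolution features learned at each scale.
        skips = []
        for conv in self.dilated_convs:
            x = self.activation(conv(x)) + x  # residual connection
            skips.append(x)
        # Wave embedding: multi-scale features summed, then pooled over time.
        embedding = torch.stack(skips).sum(0).mean(dim=-1)
        return embedding  # (batch, channels)


# Usage: a 3-second mono clip at 16 kHz yields one embedding vector,
# which a downstream classifier would map to a singer identity.
wav = torch.randn(2, 1, 48000)
emb = MDCNNEncoder()(wav)
print(emb.shape)  # torch.Size([2, 64])
```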