In this paper, ensembles of classifiers that exploit several data augmentation techniques and four signal representations for training Convolutional Neural Networks (CNNs) for audio classification are presented and tested on three freely available audio classification datasets: i) bird calls, ii) cat sounds, and iii) the Environmental Sound Classification (ESC-50) dataset. The best-performing ensembles, which combine data augmentation techniques with different signal representations, are compared and shown to outperform the best methods reported in the literature on these datasets. The approach proposed here obtains state-of-the-art results on the widely used ESC-50 dataset. To the best of our knowledge, this is the most extensive study investigating ensembles of CNNs for audio classification. The results demonstrate not only that CNNs can be trained effectively for audio classification but also that fusing them with different techniques outperforms the stand-alone classifiers.
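The fusion idea underlying the ensembles can be illustrated with a minimal, hypothetical sketch of score-level fusion, assuming each CNN (trained on a different signal representation and/or augmentation scheme) outputs per-class probabilities that are combined by a simple average (sum rule). The random arrays below are placeholders for the networks' outputs, not the actual models or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 8, 50  # e.g. ESC-50 has 50 classes

# Hypothetical softmax outputs of three CNNs, each trained on a different
# signal representation; random placeholders stand in for real predictions.
probs_per_model = [rng.dirichlet(np.ones(n_classes), size=n_samples)
                   for _ in range(3)]

# Fuse by averaging the class probabilities (equivalent to the sum rule
# up to a constant factor), then take the most likely class per sample.
fused = np.mean(probs_per_model, axis=0)
predictions = fused.argmax(axis=1)
print(predictions)
```

In this kind of fusion, a class that is consistently ranked highly by classifiers trained on complementary representations tends to win, which is one reason the combined ensemble can outperform each stand-alone CNN.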