This paper presents a new convolutional neural network (CNN) architecture for audio pattern recognition tasks. We propose a novel technique for reducing the computational complexity of such models and introduce the corresponding hyper-parameter into the CNN architecture. With optimal values of this hyper-parameter, we can maintain or even improve model performance. We confirm this with experiments on three datasets: AudioSet, ESC-50, and RAVDESS. Our best model achieves an mAP of 0.450 on AudioSet, which is below the state-of-the-art model, but our model is 7.1x faster and has 9.7x fewer parameters. On ESC-50 and RAVDESS, we obtain state-of-the-art results with accuracies of 0.961 and 0.748, respectively. Our best model for ESC-50 is 1.7x faster and 2.3x smaller than the previous best model, and our best model for RAVDESS is 3.3x smaller than the state-of-the-art model. We call our models "ERANNs" (Efficient Residual Audio Neural Networks).
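The abstract does not describe the mechanism of the complexity-reducing hyper-parameter. As a minimal illustrative sketch only, assuming the hyper-parameter acts like a downsampling (stride) factor `s` applied in the early stage of a residual CNN over log-mel spectrograms, the toy model below shows how increasing `s` shrinks feature maps early and so reduces compute; the names `ToyAudioCNN`, `ResidualBlock`, and `s` are hypothetical and this is not the authors' ERANN architecture.

```python
# Illustrative sketch (assumption): a stride-like hyper-parameter `s` in the stem
# reduces spatial size early, cutting FLOPs; parameter count is unchanged here.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection on the skip path when the shape changes
        self.skip = (
            nn.Identity()
            if stride == 1 and in_ch == out_ch
            else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
        )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))


class ToyAudioCNN(nn.Module):
    """Tiny residual CNN over log-mel spectrograms; `s` is the hypothetical
    complexity hyper-parameter (extra downsampling in the stem)."""

    def __init__(self, n_classes=50, s=2):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, stride=s, padding=1, bias=False)  # larger s => smaller maps
        self.block1 = ResidualBlock(32, 64, stride=2)
        self.block2 = ResidualBlock(64, 128, stride=2)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, 1, mel_bins, time)
        h = self.block2(self.block1(self.stem(x)))
        h = h.mean(dim=(2, 3))       # global average pooling
        return self.head(h)


if __name__ == "__main__":
    x = torch.randn(2, 1, 128, 400)   # batch of 2 log-mel spectrograms
    for s in (1, 2, 4):               # sweep the hypothetical hyper-parameter
        model = ToyAudioCNN(n_classes=50, s=s)
        stem_hw = model.stem(x).shape[-2:]
        print(f"s={s}  stem feature map={tuple(stem_hw)}  logits={tuple(model(x).shape)}")
```

In this sketch, larger `s` only trades spatial resolution for speed; how the actual ERANN hyper-parameter preserves or improves accuracy is detailed in the paper itself.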