Audio Spectrogram Transformer models rule the field of Audio Tagging, outrunning the previously dominant Convolutional Neural Networks (CNNs). Their superiority is based on the ability to scale up and exploit large-scale datasets such as AudioSet. However, Transformers are demanding in terms of model size and computational requirements compared to CNNs. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex Transformers. The proposed training schema and the efficient CNN design based on MobileNetV3 result in models that outperform previous solutions in terms of parameter and computational efficiency as well as prediction performance. We provide models of different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet. Source code available at: https://github.com/fschmid56/EfficientAT
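To illustrate the offline KD objective sketched in the abstract: a common formulation for multi-label Audio Tagging (a minimal sketch under assumed conventions, not the paper's exact implementation) mixes a binary cross-entropy loss against the ground-truth labels with a distillation term against the teacher Transformer's pre-computed (offline) probabilities. The mixing weight `lam` and the function names here are hypothetical.

```python
import numpy as np

def bce(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary cross-entropy averaged over classes (multi-label setting)."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))

def kd_loss(student_probs: np.ndarray,
            teacher_probs: np.ndarray,
            labels: np.ndarray,
            lam: float = 0.1) -> float:
    """Weighted sum of hard-label loss and soft teacher-target loss.

    teacher_probs are assumed to be stored offline, so the large teacher
    never has to run during student training.
    """
    label_loss = bce(student_probs, labels)          # hard targets
    distill_loss = bce(student_probs, teacher_probs)  # soft teacher targets
    return lam * label_loss + (1.0 - lam) * distill_loss
```

A student whose predictions track the teacher's soft targets incurs a lower loss than one that ignores them, which is what drives the efficient CNN toward the Transformer's behavior.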