Recently, semi-supervised learning (SSL) methods in the framework of deep learning (DL) have been shown to provide state-of-the-art results on image datasets by exploiting unlabeled data. Mostly tested on object recognition tasks in images, these algorithms are rarely compared when applied to audio tasks. In this article, we adapted four recent SSL methods to the task of audio tagging. The first two methods, namely Deep Co-Training (DCT) and Mean Teacher (MT), involve two collaborative neural networks. The other two algorithms, called MixMatch (MM) and FixMatch (FM), are single-model methods that rely primarily on data augmentation strategies. Using the Wide ResNet 28-2 architecture in all our experiments, with 10% of the data labeled and the remaining 90% unlabeled, we first compare the accuracy of the four methods on three standard benchmark audio event datasets: Environmental Sound Classification (ESC-10), UrbanSound8K (UBS8K), and Google Speech Commands (GSC). MM and FM significantly outperformed MT and DCT, MM being the best method in most experiments. On UBS8K and GSC in particular, MM achieved 18.02% and 3.25% error rates (ER), respectively, outperforming models trained with 100% of the available labeled data, which reached 23.29% and 4.94% ER. Second, we explored the benefits of using the mixup augmentation in the four algorithms. In almost all cases, mixup brought significant gains. For instance, on GSC, FM reached 4.44% ER without mixup and 3.31% with it.
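The mixup augmentation credited with the gains above combines pairs of training examples and their labels by a convex interpolation. A minimal sketch of the standard formulation follows; the function name, the `alpha` value, and the use of NumPy arrays are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4):
    # Draw the interpolation weight from a Beta(alpha, alpha) distribution,
    # as in the standard mixup formulation (Zhang et al.).
    lam = np.random.beta(alpha, alpha)
    # Interpolate both the inputs (e.g. audio spectrograms) and the
    # one-hot label vectors with the same coefficient.
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y
```

Because the labels are mixed with the same coefficient as the inputs, the resulting soft targets still sum to one when the original labels are one-hot, which keeps them valid for a cross-entropy loss.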