Existing contrastive learning methods for anomalous sound detection refine the representation of each audio sample by contrasting the sample's augmentations (e.g., with time or frequency masking). However, because such augmentations lack the physical properties of machine sound, these methods may be biased by the augmented data, limiting detection performance. This paper instead uses contrastive learning to refine audio representations for each machine ID rather than for each audio sample. The proposed two-stage method first pretrains the audio representation model with contrastive learning that incorporates machine ID, then fine-tunes the learned model with a self-supervised ID classifier while strengthening the relations among audio features from the same ID. Experiments on the DCASE 2020 Challenge Task 2 dataset show that our method outperforms state-of-the-art methods based on contrastive learning or self-supervised classification in both overall anomaly detection performance and stability.
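The first stage can be illustrated with a minimal sketch of a supervised contrastive loss that treats clips sharing a machine ID as positives and all other clips in the batch as negatives. This is an assumed NumPy formulation for illustration (the function name, temperature value, and batch-level details are ours, not the paper's):

```python
import numpy as np

def id_contrastive_loss(embeddings, machine_ids, temperature=0.1):
    """Hypothetical ID-conditioned contrastive loss (sketch).

    Embeddings of clips with the same machine ID are pulled together;
    all other clips in the batch act as negatives.
    """
    # L2-normalize embeddings and build a cosine-similarity matrix.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(machine_ids)
    self_mask = np.eye(n, dtype=bool)
    # Exclude each anchor's similarity with itself from the softmax.
    sim = np.where(self_mask, -np.inf, sim)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    ids = np.asarray(machine_ids)
    positives = (ids[:, None] == ids[None, :]) & ~self_mask

    # Average log-probability over positives, per anchor that has any.
    counts = positives.sum(axis=1)
    valid = counts > 0
    per_anchor = np.where(positives, log_prob, 0.0).sum(axis=1)
    return -(per_anchor[valid] / counts[valid]).mean()
```

In the second stage, the pretrained encoder would be fine-tuned with a cross-entropy classifier predicting the machine ID, so that anomaly scores can be derived from the classifier's confidence on the correct ID; the loss above only sketches the pretraining objective.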