This paper describes the deepfake audio detection system submitted to Track 3.2 of the Audio Deep Synthesis Detection (ADD) Challenge and presents an analysis of score fusion. The proposed system is a score-level fusion of several models based on the light convolutional neural network (LCNN). Various front-ends are used as input features, including the low-frequency short-time Fourier transform and the constant-Q transform. Because of the complex noise conditions and the variety of synthesis algorithms, it is difficult to obtain the desired performance by training on the training set directly; online data augmentation effectively improves the robustness of fake audio detection systems. In particular, the reasons for the limited gains from score fusion are explored by visualizing the score distributions and comparing them with those on another dataset. Overfitting to the training set drives the scores to extreme values and lowers the correlation between the score distributions, which makes score fusion difficult. Fusing with a partially fake audio detection system further improves performance. The submission to Track 3.2 achieved a weighted equal error rate (WEER) of 11.04\%, making it one of the best-performing systems in the challenge.
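As a minimal sketch (not the authors' exact recipe), score-level fusion of several detectors can be implemented as a weighted average of per-system normalized scores; the normalization step matters because, as the abstract notes, overfit systems can emit scores on very different, extreme scales:

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Score-level fusion: z-normalize each system's scores, then take
    a weighted average across systems. score_lists has shape
    (n_systems, n_trials); returns one fused score per trial."""
    scores = np.asarray(score_lists, dtype=float)
    # Normalize per system so score ranges are comparable before fusing.
    mu = scores.mean(axis=1, keepdims=True)
    sigma = scores.std(axis=1, keepdims=True)
    normed = (scores - mu) / np.where(sigma > 0, sigma, 1.0)
    if weights is None:
        # Equal weights by default; tuned weights are an assumption here.
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    return np.average(normed, axis=0, weights=weights)
```

For example, fusing two systems whose raw scores live on different scales yields a single score per trial that preserves their shared ranking of trials.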