DeepFake audio, unlike DeepFake images and videos, has been relatively less explored from a detection perspective, and the existing solutions for synthetic speech classification either use complex networks or do not generalize to the different varieties of synthetic speech obtained using different generative and optimization-based methods. Through this work, we propose a channel-wise recalibration of features using attentional feature fusion for synthetic speech detection and compare its performance against different detection methods, including End2End models and ResNet-based models, on synthetic speech generated using Text-to-Speech and vocoder systems such as WaveNet, WaveRNN, Tacotron, and WaveGlow. We also experiment with Squeeze-and-Excitation (SE) blocks in our ResNet models and find that this combination achieves better performance. In addition to this analysis, we demonstrate that combining Linear Frequency Cepstral Coefficients (LFCC) and Mel Frequency Cepstral Coefficients (MFCC) using the attentional feature fusion technique creates better input feature representations, which helps even simpler models generalize well on synthetic speech classification tasks. Our models (ResNet-based with feature fusion), trained on the Fake or Real (FoR) dataset, achieve 95% test accuracy on the FoR data and an average of 90% accuracy on samples we generated using different generative models after adapting this framework.
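To make the two main ideas named above concrete, the following is a minimal PyTorch sketch (not the authors' released code) of channel-wise recalibration via a Squeeze-and-Excitation block and an attention-based fusion of LFCC and MFCC feature maps; the shapes, reduction ratio, and fusion formulation are illustrative assumptions rather than the exact architecture used in the paper.

```python
# Minimal sketch: SE-style channel recalibration + attentional fusion of two cepstral feature maps.
# All layer sizes and the toy input shapes below are assumptions for illustration only.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excitation: recalibrate each channel

class AttentionalFeatureFusion(nn.Module):
    """Fuse two feature maps (e.g. LFCC and MFCC branches) with learned attention weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, lfcc_feat: torch.Tensor, mfcc_feat: torch.Tensor) -> torch.Tensor:
        a = self.attn(lfcc_feat + mfcc_feat)         # attention map from the summed features
        return a * lfcc_feat + (1 - a) * mfcc_feat   # soft selection between the two cues

if __name__ == "__main__":
    # Toy shapes: batch of 4, 32 channels, 60 cepstral bins x 100 frames (illustrative).
    lfcc = torch.randn(4, 32, 60, 100)
    mfcc = torch.randn(4, 32, 60, 100)
    fused = AttentionalFeatureFusion(32)(lfcc, mfcc)
    recalibrated = SEBlock(32)(fused)
    print(recalibrated.shape)                        # torch.Size([4, 32, 60, 100])
```

The fused representation would then feed a ResNet-style classifier; in this sketch the attention map simply interpolates between the LFCC and MFCC branches, which is one common way to realize attentional feature fusion.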