The rapid growth of speech synthesis and voice conversion systems has made deepfake audio a major security concern, yet Bengali deepfake detection remains largely unexplored. In this work, we study automatic detection of Bengali audio deepfakes using the BanglaFake dataset. We first evaluate zero-shot inference with several pretrained models: Wav2Vec2-XLSR-53, Whisper, PANNs-CNN14, WavLM, and Audio Spectrogram Transformer. Zero-shot results show limited detection ability; the best model, Wav2Vec2-XLSR-53, achieves only 53.80% accuracy, 56.60% AUC, and 46.20% EER. We then fine-tune multiple architectures for Bengali deepfake detection: Wav2Vec2-Base, LCNN, LCNN-Attention, ResNet18, ViT-B16, and CNN-BiLSTM. Fine-tuned models show strong performance gains, with ResNet18 achieving the highest accuracy of 79.17%, an F1 score of 79.12%, an AUC of 84.37%, and an EER of 24.35%. Experimental results confirm that fine-tuning significantly improves performance over zero-shot inference. This study provides the first systematic benchmark for Bengali deepfake audio detection and highlights the effectiveness of fine-tuned deep learning models for this low-resource language.
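The abstract reports AUC and EER alongside accuracy. As a minimal sketch (not the authors' evaluation code), these two metrics are typically derived from a detector's per-utterance scores and binary labels as follows, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def compute_auc_eer(labels, scores):
    """Compute AUC and Equal Error Rate (EER) from binary labels
    (1 = fake, 0 = real) and detection scores (higher = more likely fake).
    """
    auc = roc_auc_score(labels, scores)
    # Sweep thresholds over the ROC curve; EER is the operating point
    # where the false positive rate equals the false negative rate.
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2.0
    return auc, eer

# Toy example with perfectly separable scores: AUC = 1.0, EER = 0.0.
auc, eer = compute_auc_eer([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```

A lower EER indicates a better detector; the paper's drop from 46.20% (zero-shot) to 24.35% (fine-tuned ResNet18) reflects this convention.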