To improve the detection accuracy and generalization of steganalysis, this paper proposes the Steganalysis Contrastive Framework (SCF), built on contrastive learning. The SCF improves the feature representation of steganalysis by maximizing the distance between features of samples from different classes and minimizing the distance between features of samples from the same class. To reduce the computational complexity of the contrastive loss in supervised learning, we design a novel Steganalysis Contrastive Loss (StegCL) based on the equivalence and transitivity of similarity; StegCL eliminates the redundant computation in existing contrastive losses. Experimental results show that the SCF improves the generalization and detection accuracy of existing steganalysis DNNs, with maximum gains of 2% and 3%, respectively. Without reducing detection accuracy, training with StegCL takes 10% of the time required by the standard supervised contrastive loss.
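The idea of pulling same-class features together and pushing different-class features apart can be illustrated with a minimal numpy sketch of a supervised contrastive loss of the kind the SCF builds on. This is an assumption-laden illustration, not the paper's StegCL: the function name, the temperature value, and the SupCon-style formulation are all choices made here for clarity.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss (not the paper's StegCL):
    for each anchor, raise the softmax probability of same-label features
    (pulling them closer) relative to different-label features."""
    # L2-normalize so dot products are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature                        # pairwise similarity logits
    n = len(labels)
    # Exclude each sample's similarity to itself from the softmax
    logits = sim - 1e9 * np.eye(n)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    # Positive pairs: same label, excluding the anchor itself
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # Average log-probability over each anchor's positives, then over anchors
    loss = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return loss.mean()
```

When features of the same class already lie close together, the loss is small; shuffling the labels so that positives point at dissimilar features raises it, which is exactly the gradient signal the framework exploits to sharpen the steganalysis feature representation.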