For image forensics, convolutional neural networks (CNNs) tend to learn content features rather than subtle manipulation traces, which limits forensic performance. Existing methods predominantly address this challenge with a common pipeline: subtracting the original pixel value from the predicted pixel value so that CNNs attend to the manipulation traces. However, owing to the complicated learning mechanism involved, these methods may incur unnecessary performance losses. In this work, we rethink the advantages of the gradient operator in exposing face forgery and design two plug-and-play modules that combine the gradient operator with CNNs: the tensor pre-processing (TP) module and the manipulation trace attention (MTA) module. Specifically, the TP module refines the feature tensor of each channel in the network with the gradient operator to highlight manipulation traces and improve feature representation. The MTA module attends over two dimensions, channel and manipulation trace, to force the network to learn the distribution of manipulation traces. Both modules can be seamlessly integrated into CNNs for end-to-end training. Experiments show that the proposed network outperforms prior works on five public datasets. In particular, through simple tensor refinement alone, the TP module improves accuracy by at least 4.60% over the existing pre-processing module. The code is available at: https://github.com/EricGzq/GocNet-pytorch.
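As a rough illustration of the TP idea, the sketch below applies a fixed Sobel gradient operator depthwise to every channel of a CNN feature tensor and adds the gradient magnitude back as a residual refinement. The kernel choice, fusion rule, and placement in the network are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch of a TP-style channel-wise refinement (assumption: Sobel
# kernels as the gradient operator; the paper's actual module may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TensorPreprocessing(nn.Module):
    """Refine each channel of a feature tensor with a fixed gradient operator."""

    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # One (x, y) kernel pair per channel, applied depthwise (groups=channels).
        kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # (2, 1, 3, 3)
        kernels = kernels.repeat(channels, 1, 1, 1)              # (2C, 1, 3, 3)
        self.register_buffer("kernels", kernels)
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature tensor from a CNN stage.
        grads = F.conv2d(x, self.kernels, padding=1, groups=self.channels)
        gx, gy = grads[:, 0::2], grads[:, 1::2]
        magnitude = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # highlight edge-like traces
        return x + magnitude                               # residual refinement


if __name__ == "__main__":
    tp = TensorPreprocessing(channels=64)
    feat = torch.randn(2, 64, 56, 56)
    print(tp(feat).shape)  # torch.Size([2, 64, 56, 56])
```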