We present ASAP, a new framework for detecting and grounding multi-modal media manipulation (DGM4). Upon thorough examination, we observe that accurate fine-grained cross-modal semantic alignment between the image and text is vital for accurate manipulation detection and grounding. However, existing DGM4 methods pay little attention to cross-modal alignment, which hampers further improvement in manipulation detection accuracy. To remedy this issue, this work aims to advance semantic alignment learning to promote this task. Specifically, we utilize off-the-shelf Multimodal Large Language Models (MLLMs) and Large Language Models (LLMs) to construct paired image-text data, especially for manipulated instances. Subsequently, cross-modal alignment learning is performed to enhance the semantic alignment. Beyond these explicit auxiliary clues, we further design a Manipulation-Guided Cross Attention (MGCA) mechanism to provide implicit guidance for augmenting manipulation perception. With the ground truth available during training, MGCA encourages the model to concentrate more on manipulated components while downplaying normal ones, enhancing its ability to capture manipulations. Extensive experiments on the DGM4 dataset demonstrate that our model surpasses the comparison methods by a clear margin.
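To make the MGCA idea concrete, the following is a minimal PyTorch sketch of one possible manipulation-guided cross-attention block: text tokens attend to image patches, and during training a binary manipulation mask is used to encourage attention mass on manipulated patches while downplaying normal ones. The class name, the mask-based guidance loss, and all shapes and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of manipulation-guided cross attention (MGCA-style).
# All names and the guidance loss are assumptions for illustration only.
import torch
import torch.nn as nn

class ManipulationGuidedCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q_proj = nn.Linear(dim, dim)        # queries from text tokens
        self.kv_proj = nn.Linear(dim, dim * 2)   # keys/values from image patches
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, text_feats, image_feats, manip_mask=None):
        # text_feats:  (B, Nt, D) text token features
        # image_feats: (B, Ni, D) image patch features
        # manip_mask:  (B, Ni) binary mask, 1 = manipulated patch (training only)
        B, Nt, D = text_feats.shape
        Ni = image_feats.shape[1]
        H = self.num_heads

        q = self.q_proj(text_feats).view(B, Nt, H, -1).transpose(1, 2)   # (B, H, Nt, d)
        k, v = self.kv_proj(image_feats).chunk(2, dim=-1)
        k = k.view(B, Ni, H, -1).transpose(1, 2)                         # (B, H, Ni, d)
        v = v.view(B, Ni, H, -1).transpose(1, 2)

        attn = (q @ k.transpose(-2, -1)) * self.scale                    # (B, H, Nt, Ni)
        attn = attn.softmax(dim=-1)
        out = self.out_proj((attn @ v).transpose(1, 2).reshape(B, Nt, D))

        guide_loss = attn.new_zeros(())
        if manip_mask is not None:
            # Each attention row sums to 1 over patches, so the mean over heads
            # and text tokens is a distribution over patches. Penalizing the
            # mass that falls outside the manipulated region pushes attention
            # toward manipulated components and away from normal ones.
            mass_on_manip = (attn.mean(dim=(1, 2)) * manip_mask).sum(-1)  # (B,)
            guide_loss = (1.0 - mass_on_manip).clamp(min=0).mean()
        return out, guide_loss
```

In such a design the guidance loss would simply be added to the main detection and grounding objectives during training, and the mask input dropped at inference time.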