We introduce FakeParts, a new class of deepfakes characterized by subtle, localized manipulations to specific spatial regions or temporal segments of otherwise authentic videos. Unlike fully synthetic content, these partial manipulations, ranging from altered facial expressions to object substitutions and background modifications, blend seamlessly with real elements, making them particularly deceptive and difficult to detect. To address this critical gap in detection, we present FakePartsBench, the first large-scale benchmark specifically designed to capture the full spectrum of partial deepfakes. Comprising over 81K videos (including 44K FakeParts) with pixel- and frame-level manipulation annotations, our dataset enables comprehensive evaluation of detection methods. Our user studies demonstrate that FakeParts reduces human detection accuracy by up to 26% compared to traditional deepfakes, with similar performance degradation observed in state-of-the-art detection models. This work identifies an urgent vulnerability in current detectors and provides the resources needed to develop methods robust to partial manipulations.