We learn a visual representation that captures information about the camera that recorded a given photo. To do this, we train a multimodal embedding between image patches and the EXIF metadata that cameras automatically insert into image files. Our model represents this metadata by simply converting it to text and then processing it with a transformer. The features that we learn significantly outperform other self-supervised and supervised features on downstream image forensics and calibration tasks. In particular, we successfully localize spliced image regions "zero shot" by clustering the visual embeddings for all of the patches within an image.
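To make the setup concrete, below is a minimal sketch (not the authors' released code) of a CLIP-style objective for this kind of patch-to-metadata embedding. The `exif_to_text` serializer, the temperature value, and the symmetric InfoNCE form are illustrative assumptions; the abstract states only that EXIF metadata is converted to text and contrasted against image patches.

```python
# Hedged sketch of a cross-modal contrastive objective between image-patch
# embeddings and EXIF metadata rendered as text. Encoder architectures are
# omitted; only the loss and text serialization are illustrated.
import torch
import torch.nn.functional as F

def exif_to_text(exif: dict) -> str:
    # Assumption: EXIF tags are serialized as plain text,
    # e.g. "Model: NIKON D90 | FNumber: 5.6 | ISO: 200".
    return " | ".join(f"{k}: {v}" for k, v in exif.items())

def contrastive_loss(patch_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # patch_emb, text_emb: (batch, dim) outputs of the two encoders.
    patch_emb = F.normalize(patch_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = patch_emb @ text_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(len(logits), device=logits.device)
    # Symmetric InfoNCE: match each patch to its own EXIF text and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```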
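The zero-shot splice localization could then look like the following hedged sketch: embed every patch of a test image, cluster the embeddings into two groups, and flag the minority cluster as the candidate spliced region. The choice of k-means with k=2 and the minority-cluster heuristic are assumptions for illustration; the abstract says only that the patch embeddings are clustered.

```python
# Hedged sketch of "zero shot" splice localization by clustering the visual
# embeddings of all patches within a single image. `patch_embs` is assumed to
# come from the trained patch encoder; `grid_hw` is the patch-grid shape.
import numpy as np
from sklearn.cluster import KMeans

def localize_splice(patch_embs: np.ndarray,
                    grid_hw: tuple[int, int]) -> np.ndarray:
    # patch_embs: (num_patches, dim) embeddings for one image's patches.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(patch_embs)
    # Assumption: the spliced region is the smaller of the two clusters,
    # since a splice usually covers a minority of the image.
    minority = 1 if labels.sum() < len(labels) / 2 else 0
    return (labels == minority).reshape(grid_hw)  # boolean splice mask
```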