Many applications require grouping instances contained in diverse document datasets into classes. Most widely used methods do not employ deep learning and do not exploit the inherently multimodal nature of documents. Notably, record linkage is typically conceptualized as a string-matching problem. This study develops CLIPPINGS (Contrastively Linking Pooled Pre-trained Embeddings), a multimodal framework for record linkage. CLIPPINGS employs end-to-end training of symmetric vision and language bi-encoders, aligned through contrastive language-image pre-training, to learn a metric space where the pooled image-text representation of a given instance is close to representations in the same class and distant from representations in different classes. At inference time, instances can be linked by retrieving their nearest neighbor from an offline exemplar embedding index or by clustering their representations. The study examines two challenging applications: constructing comprehensive supply chains for mid-20th century Japan by linking firm-level financial records, with each firm name represented by its image crop from the document and the corresponding OCR, and detecting which image-caption pairs in a massive corpus of historical U.S. newspapers came from the same underlying photo wire source. CLIPPINGS outperforms widely used string-matching methods by a wide margin and also outperforms unimodal methods. Moreover, a purely self-supervised model trained only on image-OCR pairs, without requiring any labels, also outperforms popular string-matching methods.
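To make the pipeline concrete, below is a minimal sketch of the core idea in Python. It assumes simple mean pooling of the two L2-normalized encoder outputs, a standard supervised contrastive loss, and exact nearest-neighbor retrieval over an exemplar index; all function names and hyperparameters are illustrative assumptions, not the paper's released implementation.

# Minimal sketch of CLIPPINGS-style pooled multimodal record linkage.
# Assumptions (not from the paper's code release): mean pooling of the
# normalized image and text embeddings, a supervised contrastive loss,
# and brute-force cosine nearest-neighbor retrieval at inference.
import torch
import torch.nn.functional as F

def pooled_embedding(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    """Average the L2-normalized image and OCR-text embeddings, then renormalize."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    return F.normalize((img_emb + txt_emb) / 2, dim=-1)

def supervised_contrastive_loss(emb: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Pull same-class pooled embeddings together, push different classes apart."""
    sim = emb @ emb.t() / temperature                 # pairwise cosine logits
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask.fill_diagonal_(False)                    # exclude self-pairs as positives
    # Exclude self-similarity from the softmax denominator.
    sim = sim - torch.eye(len(emb), device=emb.device) * 1e9
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)         # guard against no-positive rows
    return -(log_prob * pos_mask).sum(1).div(pos_counts).mean()

def link_by_nearest_neighbor(query_emb: torch.Tensor,
                             index_emb: torch.Tensor) -> torch.Tensor:
    """Link each query to its nearest exemplar in an offline embedding index."""
    return (query_emb @ index_emb.t()).argmax(dim=1)

# Toy usage with random stand-in encoder outputs (dimension 512 is illustrative):
z = pooled_embedding(torch.randn(8, 512), torch.randn(8, 512))
loss = supervised_contrastive_loss(z, torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))

When no exemplar index is available, the same pooled embeddings can instead be grouped with any off-the-shelf clustering method, matching the two inference modes described above.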