Many applications require grouping instances from diverse document datasets into classes. Most widely used methods do not employ deep learning and do not exploit the inherently multimodal nature of documents. Notably, record linkage is typically conceptualized as a string-matching problem. This study develops CLIPPINGS (Contrastively Linking Pooled Pre-trained Embeddings), a multimodal framework for record linkage. CLIPPINGS trains symmetric vision and language bi-encoders end to end, aligned through contrastive language-image pre-training, to learn a metric space in which the pooled image-text representation of a given instance is close to representations in the same class and distant from representations in different classes. At inference time, instances can be linked by retrieving their nearest neighbor from an offline exemplar embedding index or by clustering their representations. The study examines two challenging applications: constructing comprehensive supply chains for mid-20th century Japan by linking firm-level financial records, where each firm name is represented by its cropped image from the document and the corresponding OCR text, and detecting which image-caption pairs in a massive corpus of historical U.S. newspapers came from the same underlying photo wire source. CLIPPINGS outperforms widely used string-matching methods by a wide margin and also outperforms unimodal methods. Moreover, a purely self-supervised model trained only on image-OCR pairs also outperforms popular string-matching methods without requiring any labels.
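The core mechanics described above, pooling an image embedding with the embedding of its OCR text and linking a record to its nearest exemplar in that shared metric space, can be illustrated with a minimal sketch. The sketch below assumes an off-the-shelf CLIP checkpoint from Hugging Face transformers and a simple mean of the two L2-normalized vectors as the pooling step; the model name, pooling choice, and retrieval code are illustrative assumptions, and the contrastive fine-tuning loop of CLIPPINGS is not shown.

```python
# Minimal sketch (not the authors' implementation): pool CLIP image/OCR-text
# embeddings for a record and link it to its nearest exemplar by cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; CLIPPINGS fine-tunes its bi-encoders contrastively.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pooled_embedding(image: Image.Image, ocr_text: str) -> torch.Tensor:
    """Average the L2-normalized image and OCR-text embeddings into one vector."""
    with torch.no_grad():
        img_inputs = processor(images=image, return_tensors="pt")
        txt_inputs = processor(text=[ocr_text], return_tensors="pt",
                               padding=True, truncation=True)
        img_emb = model.get_image_features(**img_inputs)
        txt_emb = model.get_text_features(**txt_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    pooled = (img_emb + txt_emb) / 2          # assumed pooling: simple mean
    return pooled / pooled.norm(dim=-1, keepdim=True)

def link_to_exemplar(query: torch.Tensor, exemplar_index: torch.Tensor) -> int:
    """Return the row index of the nearest exemplar (rows are pre-normalized)."""
    sims = exemplar_index @ query.squeeze(0)
    return int(sims.argmax())
```

In practice the exemplar embeddings would be computed offline and stored in an approximate nearest-neighbor index rather than a dense tensor, and linking by clustering the pooled representations is an alternative to retrieval, as the abstract notes.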