Alignment between image and text has shown promising improvements on patch-level pre-trained document image models. However, investigating more effective or finer-grained alignment techniques during pre-training requires substantial computation cost and time. Thus, a question naturally arises: can we fine-tune pre-trained models for downstream tasks with alignment objectives and achieve comparable or better performance? In this paper, we propose a new model architecture with alignment-enriched tuning (dubbed AETNet) upon pre-trained document image models, which adapts to downstream tasks with a joint task-specific supervised and alignment-aware contrastive objective. Specifically, we introduce an extra visual transformer as the alignment-aware image encoder and an extra text transformer as the alignment-aware text encoder before multimodal fusion. We consider alignment in the following three aspects: 1) document-level alignment by leveraging cross-modal and intra-modal contrastive losses; 2) global-local alignment for modeling localized and structural information in document images; and 3) local-level alignment for more accurate patch-level information. Experiments show that AETNet achieves state-of-the-art performance on various downstream tasks. Notably, AETNet consistently outperforms state-of-the-art pre-trained models, such as LayoutLMv3 with fine-tuning techniques, on three different downstream tasks.
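The joint objective described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the general shape of combining a task-specific supervised loss with a symmetric InfoNCE-style cross-modal contrastive loss over paired image/text embeddings (the weighting factor `alpha` and the `temperature` value are hypothetical):

```python
# Hypothetical sketch of a joint task-specific + alignment-aware objective.
# Not AETNet's actual code; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(img.size(0))            # matched pairs on the diagonal
    loss_i2t = F.cross_entropy(logits, targets)    # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text-to-image direction
    return (loss_i2t + loss_t2i) / 2


def joint_objective(task_loss, img_emb, txt_emb, alpha=0.5):
    """Supervised downstream-task loss plus a weighted contrastive alignment term.

    `alpha` is a hypothetical trade-off hyperparameter between the two terms.
    """
    return task_loss + alpha * cross_modal_contrastive_loss(img_emb, txt_emb)
```

In the same spirit, the intra-modal and local-level alignment terms would add analogous contrastive losses over image-image pairs and patch-level embeddings, each with its own weight.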