This paper studies multimodal named entity recognition (MNER) and multimodal relation extraction (MRE), which are important for multimedia social-platform analysis. The core of MNER and MRE lies in incorporating evident visual information to enhance textual semantics, where two issues inherently demand investigation. The first issue is modality noise: task-irrelevant information in each modality may act as noise that misleads the task prediction. The second issue is the modality gap: representations from different modalities are inconsistent, preventing the model from building semantic alignment between the text and image. To address these issues, we propose a novel method for MNER and MRE via Multi-Modal representation learning with the Information Bottleneck (MMIB). For the first issue, a refinement-regularizer applies the information-bottleneck principle to balance predictive evidence against noisy information, yielding expressive representations for prediction. For the second issue, an alignment-regularizer is proposed, in which a mutual-information-based term works in a contrastive manner to encourage consistent text-image representations. To the best of our knowledge, we are the first to explore variational IB estimation for MNER and MRE. Experiments show that MMIB achieves state-of-the-art performance on three public benchmarks.
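To make the two regularizers concrete, below is a minimal, illustrative PyTorch sketch of (i) a variational IB compression term, the standard way to estimate the bottleneck objective, and (ii) an InfoNCE-style contrastive term as a mutual-information lower bound for cross-modal alignment. The function names, the Gaussian prior, and the hyperparameters (`beta`, `gamma`, `tau`) are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of the two regularizers; names and hyperparameters
# are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def refinement_regularizer(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Variational IB compression term: KL(q(z|x) || N(0, I)).

    Penalizing this KL squeezes task-irrelevant (noisy) information out of
    the bottleneck representation z, whose posterior is parameterized by
    (mu, logvar) and sampled via the reparameterization trick.
    """
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1).mean()

def alignment_regularizer(text_z: torch.Tensor, image_z: torch.Tensor,
                          tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive lower bound on MI(text, image).

    Matched text-image pairs in a batch are pulled together while mismatched
    pairs are pushed apart, encouraging consistent cross-modal representations.
    """
    text_z = F.normalize(text_z, dim=-1)
    image_z = F.normalize(image_z, dim=-1)
    logits = text_z @ image_z.t() / tau                      # (B, B) similarities
    labels = torch.arange(text_z.size(0), device=text_z.device)
    # Symmetric text-to-image and image-to-text contrastive losses.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Assumed overall training objective, combining the task loss with the
# two regularizers weighted by hypothetical coefficients beta and gamma:
#   loss = task_loss + beta * refinement_regularizer(mu, logvar) \
#          + gamma * alignment_regularizer(text_z, image_z)
```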