Multimodal named entity recognition (MNER) is a critical step in information extraction, which aims to detect entity spans and classify them into the corresponding entity types given a sentence-image pair. Existing methods either (1) extract named entities using coarse-grained visual clues from attention mechanisms, or (2) first detect fine-grained visual regions with toolkits and then recognize named entities. However, they suffer from improper alignment between entity types and visual regions or from error propagation in the two-stage pipeline, which ultimately introduces irrelevant visual information into the text. In this paper, we propose a novel end-to-end framework named MNER-QG that simultaneously performs MRC-based multimodal named entity recognition and query grounding. Specifically, with the assistance of queries, MNER-QG provides prior knowledge of entity types and visual regions, and further enhances the representations of both texts and images. To conduct the query grounding task, we provide manual annotations as well as weak supervision labels obtained by training a highly flexible visual grounding model with transfer learning. We conduct extensive experiments on two public MNER datasets, Twitter2015 and Twitter2017. Experimental results show that MNER-QG outperforms the current state-of-the-art models on the MNER task and also improves query grounding performance.
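To make the MRC-based formulation concrete, the sketch below shows one way a sentence-image pair could be expanded into one query per entity type, with each query later grounded to an image region and used to extract entity spans. This is a minimal illustration under our own assumptions: the query wordings, the `TYPE_QUERIES` table, and the `predict_spans` stub are hypothetical and do not reproduce the paper's actual prompts or model.

```python
# Minimal sketch (not the authors' code) of MRC-style query construction for MNER.
# The query texts and predict_spans are illustrative assumptions only.
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical natural-language queries, one per entity type; the framework
# described in the abstract also grounds each query to a visual region.
TYPE_QUERIES = {
    "PER": "person: names of people or fictional characters",
    "LOC": "location: names of countries, cities, or landmarks",
    "ORG": "organization: names of companies, institutions, or teams",
    "MISC": "miscellaneous: other named entities",
}

@dataclass
class MRCExample:
    query: str        # entity-type query (prior knowledge of the type)
    sentence: str     # the tweet text
    image_path: str   # the paired image in which the query is grounded

def build_examples(sentence: str, image_path: str) -> List[MRCExample]:
    """Turn one sentence-image pair into one MRC example per entity type."""
    return [MRCExample(q, sentence, image_path) for q in TYPE_QUERIES.values()]

def predict_spans(example: MRCExample) -> List[Tuple[int, int]]:
    """Placeholder for the span-extraction head: a real model would score
    start/end token positions conditioned on (query, sentence, image)."""
    raise NotImplementedError

if __name__ == "__main__":
    for ex in build_examples("Kevin Durant joined the Warriors.", "tweet_img.jpg"):
        print(ex.query)
```

In this reading, answering each type-specific query amounts to jointly extracting the entity spans of that type and localizing the query in the image, which is how the query supplies prior knowledge to both modalities.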