In this paper, we present the Logically submissions to the De-Factify 2 challenge (DE-FACTIFY 2023) on Task 1 of Multi-Modal Fact Checking. We describe our submissions to this challenge, including the evidence retrieval and selection techniques explored, pre-trained cross-modal and unimodal models, and a cross-modal veracity model based on the well-established Transformer Encoder (TE) architecture, which relies heavily on the concept of self-attention. We also conduct an exploratory analysis of the Factify 2 data set that uncovers salient multi-modal patterns and the hypotheses motivating the architecture proposed in this work. A series of preliminary experiments was carried out to investigate and benchmark different pre-trained embedding models, evidence retrieval settings, and thresholds. The final system, a standard two-stage evidence-based veracity detection system, yields a weighted avg. of 0.79 on both the validation set and the final blind test set for Task 1, achieving 3rd place on the leaderboard among 9 participants, with a small margin to the top-performing system.