One of the most pressing societal issues is the fight against fake news. False claims, as difficult as they are to expose, cause a great deal of damage. Fact verification is therefore crucial and has become a topic of interest across diverse research communities. Using only the textual form of the data, we propose a solution to the problem and achieve results competitive with other approaches. Our solution is based on two approaches: a PLM (pre-trained language model) based method and a prompt-based method. The PLM-based approach uses traditional supervised learning, where the model takes 'x' as input and outputs a prediction 'y', modeling P(y|x). Prompt-based learning, in contrast, designs the input to fit the model so that the original objective can be re-framed as a (masked) language modeling problem. By employing additional prompts while fine-tuning PLMs, we can further elicit the rich knowledge stored in them to better serve downstream tasks. Our experiments show that the proposed method outperforms simply fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and 7th position on the competition leaderboard.
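The re-framing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the template wording, the label words, and the `mask_filler` interface (any fill-mask model, e.g. a HuggingFace `fill-mask` pipeline) are assumptions, and the label set shown is illustrative rather than the exact FACTIFY label inventory.

```python
# Hedged sketch of prompt-based fact verification: instead of training a
# classifier head for P(y|x), wrap the input in a cloze template so a
# masked language model predicts a label word at the [MASK] position.

def build_prompt(claim: str, evidence: str) -> str:
    """Re-frame classification as (masked) language modeling."""
    return f"Claim: {claim} Evidence: {evidence} The claim is [MASK]."

# Verbalizer: maps label words the PLM can emit back to task labels.
# (Illustrative label words; a real system would tune these.)
VERBALIZER = {"true": "Support", "false": "Refute", "unclear": "Insufficient"}

def predict(mask_filler, claim: str, evidence: str) -> str:
    """Pick the highest-scoring label word at the masked position.

    `mask_filler(prompt, targets=...)` is assumed to return a list of
    {"token_str": ..., "score": ...} dicts, as a fill-mask pipeline does.
    """
    candidates = mask_filler(build_prompt(claim, evidence),
                             targets=list(VERBALIZER))
    best = max(candidates, key=lambda c: c["score"])
    return VERBALIZER[best["token_str"].strip()]
```

In use, `mask_filler` would be a fine-tuned PLM; the supervised baseline instead learns P(y|x) directly from a classification head, which is the comparison the abstract draws.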