This paper describes the participation of our team "techno" in the CERIST'22 shared tasks. We used the provided dataset "task1.c", which relates to the COVID-19 pandemic and comprises 4128 tweets for the sentiment analysis task and 8661 tweets for the fake news detection task. We combined natural language processing tools with BERT (Bidirectional Encoder Representations from Transformers), one of the most widely used pre-trained language models. The results show the efficacy of pre-trained language models: we attained an accuracy of 0.93 on the sentiment analysis task and 0.90 on the fake news detection task.