COVID-19 impacted every part of the world, but misinformation about the outbreak traveled faster than the virus itself. Misinformation spread through online social networks (OSNs) often misled people away from correct medical practices. In particular, OSN bots have been a primary source of disseminating false information and initiating cyber propaganda. Existing work neglects the presence of bots, which act as a catalyst in the spread, and focuses on detecting fake news in articles shared within posts rather than in the post (textual) content itself. Most work on misinformation detection relies on manually labeled datasets, which are hard to scale when building predictive models. In this research, we overcome this data-scarcity challenge by proposing an automated approach for labeling a Twitter dataset using verified fact-checked statements. In addition, we combine textual features with user-level features (such as followers count and friends count) and tweet-level features (such as the number of mentions, hashtags, and URLs in a tweet) to serve as additional indicators for detecting misinformation. Moreover, we analyze the presence of bots in tweets and show that bots change their behavior over time and are most active during the misinformation campaign. We collected 10.22 million COVID-19 related tweets and used our annotation model to build an extensive and original ground-truth dataset for classification. We apply various machine learning models to detect misinformation; our best classifier achieves 82% precision, 96% recall, and a 3.58% false positive rate. Our bot analysis further indicates that bots generated approximately 10% of misinformation tweets. Our methodology results in substantial exposure of false information, thus improving the trustworthiness of information disseminated through social media platforms.
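The feature-combination step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the field names (`followers_count`, `mentions`, etc.), the TF-IDF text representation, and the logistic-regression classifier are all assumptions chosen for concreteness; the paper only specifies that textual features are combined with user-level and tweet-level features.

```python
# Sketch: concatenating text features with user- and tweet-level features.
# All field names and model choices are illustrative assumptions.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy tweets; 1 = misinformation, 0 = not misinformation (hypothetical labels)
tweets = [
    {"text": "Garlic cures COVID-19, share now!",
     "followers_count": 12, "friends_count": 300,
     "mentions": [], "hashtags": ["cure"], "urls": ["http://example.com"]},
    {"text": "WHO: wash hands and keep physical distance.",
     "followers_count": 5000, "friends_count": 200,
     "mentions": ["@WHO"], "hashtags": [], "urls": []},
]
labels = [1, 0]

def side_features(t):
    # User-level (followers, friends) and tweet-level (mentions, hashtags, URLs) counts
    return [t["followers_count"], t["friends_count"],
            len(t["mentions"]), len(t["hashtags"]), len(t["urls"])]

# Textual features via TF-IDF (one plausible choice of text representation)
vec = TfidfVectorizer()
X_text = vec.fit_transform(t["text"] for t in tweets)

# Stack the five side features alongside the sparse text matrix
X_side = csr_matrix(np.array([side_features(t) for t in tweets], dtype=float))
X = hstack([X_text, X_side])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
preds = clf.predict(X)
```

In practice the side features would typically be scaled before concatenation (e.g. with `StandardScaler`), since raw follower counts dwarf TF-IDF weights; that step is omitted here for brevity.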