Prior work has extensively studied misinformation related to news, politics, and health; however, misinformation can also concern technological topics. While less controversial, such misinformation can severely harm companies' reputations and revenues, as well as users' online experiences. Recently, social media has also been increasingly used as a novel knowledge base for extracting timely and relevant security threats, which are fed into threat intelligence systems to improve their performance. However, campaigns spreading false security threats can make these systems vulnerable to poisoning attacks. In this work, we proposed novel approaches for detecting misinformation about cybersecurity and privacy threats on social media, focusing on two topics with different types of misinformation: phishing websites and Zoom's security and privacy threats. We developed a framework for detecting inaccurate phishing claims on Twitter. Using this framework, we were able to label about 9% of URLs and 22% of phishing reports as misinformation. We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms. Our classifiers performed well, achieving more than 98% accuracy. Applying these classifiers to posts from Facebook, Instagram, Reddit, and Twitter, we found that about 18%, 3%, 4%, and 3% of posts, respectively, were misinformation. In addition, we studied the characteristics of misinformation posts, their authors, and their timelines, which helped us identify campaigns.