How, in 20 short years, did we go from the internet's promise to democratize access to knowledge and make the world more understanding and enlightened, to the litany of daily horrors that is today's internet? We are awash in disinformation consisting of lies, conspiracies, and general nonsense, all with real-world implications ranging from horrific human rights violations to threats to our democracy and global public health. Although the internet is vast, the peddlers of disinformation appear to be more localized. To this end, we describe a domain-level analysis for predicting whether a domain is complicit in distributing or amplifying disinformation. This analysis examines a domain's underlying content and the hyperlink connectivity between domains to predict whether a domain is peddling disinformation. These basic insights extend to an analysis of disinformation on Telegram and Twitter. From these insights, we propose that search engines and social-media recommendation algorithms can systematically discover and demote the worst disinformation offenders, returning some trust and sanity to our online communities.
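To make the domain-level idea concrete, the following is a minimal sketch, not the authors' implementation, of a classifier that combines content features from a domain's crawled text with a simple connectivity feature: the fraction of a domain's outlinks that point to already-known disinformation domains. All domain names, labels, and the `known_bad` seed list are hypothetical toy data.

```python
# Minimal sketch of a domain-level disinformation classifier combining
# content (TF-IDF) and connectivity (outlinks to known-bad domains) features.
# Toy data only; this is illustrative, not the paper's actual pipeline.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (domain, concatenated page text, outlinked domains, label: 1 = disinformation)
domains = [
    ("example-news.com",  "city council vote budget report",         ["example-wire.com"],  0),
    ("example-wire.com",  "election results certified after recount", ["example-news.com"], 0),
    ("example-hoax.net",  "secret cabal controls vaccine microchips",  ["example-rumor.org"], 1),
    ("example-rumor.org", "deep state hides the miracle cure truth",   ["example-hoax.net"],  1),
]
known_bad = {"example-hoax.net", "example-rumor.org"}  # hypothetical seed list

texts  = [d[1] for d in domains]
labels = np.array([d[3] for d in domains])

# Content features: TF-IDF over each domain's crawled text.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts)

# Connectivity feature: share of a domain's outlinks landing on known-bad domains.
def bad_link_fraction(outlinks):
    return sum(1 for d in outlinks if d in known_bad) / max(len(outlinks), 1)

X_links = csr_matrix([[bad_link_fraction(d[2])] for d in domains])

# Concatenate content and connectivity features, then fit a simple classifier.
X = hstack([X_text, X_links])
clf = LogisticRegression().fit(X, labels)

# Score a new, unlabeled domain the same way.
new_text  = ["hidden cure suppressed by a global cabal"]
new_links = ["example-hoax.net", "example-news.com"]
X_new = hstack([vectorizer.transform(new_text),
                csr_matrix([[bad_link_fraction(new_links)]])])
print(clf.predict_proba(X_new)[0, 1])  # estimated probability of disinformation
```

A production system would replace the toy seed list with curated fact-checking sources and operate over the full hyperlink graph, but the shape of the computation, content features plus link-based features feeding a classifier, is the same.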