We propose multi-agent reinforcement learning as a new method for modeling fake news in social networks. This method allows us to model human behavior both in populations unaccustomed to fake news and in populations that have adapted to its presence; the latter in particular is challenging for existing methods. We find that a fake-news attack is more effective when it targets highly connected people and people with weaker private information. Attacks are also more effective when the disinformation is spread across several agents than when it is concentrated with greater intensity on fewer agents. Furthermore, fake news spreads less well in balanced networks than in clustered networks. We test part of our findings in a human-subject experiment. The experimental evidence supports the model's predictions, suggesting that the model is suitable for analyzing the spread of fake news in social networks.
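As a rough illustration of the kind of dynamics the abstract describes, the following sketch simulates belief updating on a small random network under a fake-news attack and compares a concentrated attack on a few highly connected agents with the same budget spread across more agents. This is not the paper's multi-agent reinforcement learning model; the network, the DeGroot-style updating rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a DeGroot-style belief-updating model, not the
# paper's multi-agent RL setup. All parameters below are assumptions.
rng = np.random.default_rng(0)

n = 20                      # number of agents
true_state = 1.0            # ground-truth value agents try to learn
fake_value = -1.0           # value pushed by the fake-news attacker

# Random undirected network (Erdos-Renyi), plus self-loops so every agent
# keeps some weight on its own belief.
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)
np.fill_diagonal(adj, 1.0)
weights = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic mixing matrix

def run_attack(targets, intensity, steps=50, signal_noise=0.5):
    """Simulate belief dynamics when `targets` receive fake signals of
    strength `intensity`; return the population's final mean belief."""
    beliefs = true_state + signal_noise * rng.standard_normal(n)  # noisy private signals
    for _ in range(steps):
        beliefs = weights @ beliefs                    # average neighbours' beliefs
        beliefs[targets] = (1 - intensity) * beliefs[targets] + intensity * fake_value
    return beliefs.mean()

degree = adj.sum(axis=1)
top_hubs = np.argsort(degree)[-2:]        # concentrated attack on 2 hubs
many_agents = np.argsort(degree)[-8:]     # same total budget spread over 8 agents

concentrated = run_attack(top_hubs, intensity=0.8)
spread_out = run_attack(many_agents, intensity=0.2)

print(f"mean belief, concentrated attack: {concentrated:.3f}")
print(f"mean belief, spread-out attack:   {spread_out:.3f}")
print(f"(true state is {true_state}; lower means beliefs were pulled further)")
```

A toy simulation like this can show how attack placement and network structure shape the final distribution of beliefs, but the abstract's claims rest on the authors' learning agents and human-subject experiment, not on this sketch.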