The idea that social media platforms like Twitter are inhabited by vast numbers of social bots has become widely accepted in recent years. Social bots are assumed to be automated social media accounts operated by malicious actors with the goal of manipulating public opinion. They are credited with the ability to produce content autonomously and to interact with human users. Social bot activity has been reported in many different political contexts, including U.S. presidential elections and discussions about migration, climate change, and COVID-19. However, the relevant publications either use crude and questionable heuristics to discriminate between supposed social bots and humans or -- in the vast majority of cases -- rely entirely on the output of automatic bot detection tools, most commonly Botometer. In this paper, we point out a fundamental theoretical flaw in the widely used study design for estimating the prevalence of social bots. Furthermore, we empirically investigate the validity of peer-reviewed Botometer-based studies by closely and systematically inspecting hundreds of accounts that had been counted as social bots. We were unable to find a single social bot. Instead, we found mostly accounts undoubtedly operated by human users, the vast majority of them using Twitter in an inconspicuous and unremarkable fashion without the slightest trace of automation. We conclude that studies claiming to investigate the prevalence, properties, or influence of social bots based on Botometer have, in reality, just investigated false positives and artifacts of this approach.
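For concreteness, the following minimal sketch illustrates the threshold-based study design criticized here: each account in a sample is scored with the Botometer API, and every account whose score exceeds a fixed cutoff is counted as a social bot. The snippet uses the public botometer Python client; the credential placeholders, the example accounts, and the 0.5 CAP cutoff are illustrative assumptions rather than values taken from any particular study.

```python
# Illustrative sketch of the common Botometer-based prevalence estimate:
# score accounts, apply a fixed threshold, report the share above it as "bots".
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"      # assumption: Botometer Pro key (RapidAPI)
twitter_app_auth = {                    # assumption: Twitter API app credentials
    "consumer_key": "...",
    "consumer_secret": "...",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical sample

checked = 0
bot_count = 0
for screen_name in accounts:
    try:
        result = bom.check_account(screen_name)
    except Exception:
        continue  # skip suspended, protected, or deleted accounts
    checked += 1
    cap = result["cap"]["universal"]    # "complete automation probability"
    if cap >= 0.5:                      # arbitrary cutoff; varies across studies
        bot_count += 1

if checked:
    print(f"Estimated bot prevalence: {bot_count / checked:.1%}")
```

Note that the entire estimate hinges on the classifier's scores and the chosen cutoff; no account above the threshold is ever inspected manually, which is precisely the step this paper adds.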