Companies, organizations, and governments across the world are eager to employ so-called 'AI' (artificial intelligence) technology in a broad range of products and systems. The promise of this cause célèbre is increased automation, efficiency, and productivity; meanwhile, critics warn of illusions of objectivity, pollution of our information ecosystems, and the reproduction of biases and discriminatory outcomes. This paper explores patterns of motivation in the general population for trusting (or distrusting) 'AI' systems. Based on a survey with more than 450 respondents from more than 30 countries (and about 3000 open-text answers), the paper presents a qualitative analysis of current opinions and thoughts about 'AI' technology, focusing on reasons for trusting such systems. These reasons are synthesized into four rationales (lines of reasoning): the Human favoritism rationale, the Black box rationale, the OPSEC rationale, and the 'Wicked world, tame computers' rationale. These rationales provide insights into human motivation for trusting 'AI' that could be relevant for developers and designers of such systems, as well as for scholars developing measures of trust in technological systems.