Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasive power, allowing personalized persuasion to be deployed at scale, powering misinformation campaigns, and changing the way humans can shape their own discourse. We consider ways AI-driven persuasion could differ from human-driven persuasion. We warn that ubiquitous highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future. In response, we examine several potential responses to AI-driven persuasion: prohibition, identification of AI agents, truthful AI, and legal remedies. We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.