Artificial Intelligence (AI) is increasingly becoming a trusted advisor in people's lives. A new concern arises when AI advises people to break ethical rules for profit. In a large-scale behavioural experiment (N = 1,572), we test whether AI-generated advice can corrupt people. We further test whether transparency about the AI origin of the advice, a commonly proposed policy, mitigates the potential harm of AI-generated advice. Using the natural language processing model GPT-2, we generated honesty-promoting and dishonesty-promoting advice. Participants read one type of advice before engaging in a task in which they could lie for profit. By testing human behaviour in interaction with actual AI outputs, we provide the first behavioural insights into the role of AI as an advisor. The results reveal that AI-generated advice corrupts people, even when they know that the advice comes from an AI. In fact, the corrupting force of AI-generated advice is as strong as that of human-written advice.
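The abstract does not detail the generation pipeline; as a rough illustration only, the sketch below shows how short, advice-like continuations could be sampled from the publicly released GPT-2 model via the Hugging Face transformers library. The library choice, seed prompts, and sampling parameters are assumptions for illustration, not the authors' actual setup.

```python
# Hypothetical sketch: sampling advice-like text from GPT-2.
# Assumes the Hugging Face `transformers` library; the prompts and
# sampling parameters are illustrative, not the paper's pipeline.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Illustrative seed prompts for the two advice conditions.
prompts = {
    "honesty": "Advice: always report the truth, because",
    "dishonesty": "Advice: report whatever earns you the most, because",
}

for condition, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_length=60,                        # short, advice-sized continuations
        do_sample=True,                       # stochastic sampling, not greedy decoding
        top_k=50,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
    print(condition, "->", tokenizer.decode(output[0], skip_special_tokens=True))
```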