We re-replicate 14 psychology studies from the Many Labs 2 replication project (Klein et al., 2018) with OpenAI's text-davinci-003 model, colloquially known as GPT3.5. Among the eight studies we could analyse, our GPT sample replicated 37.5% of the original results and 37.5% of the Many Labs 2 results. We could not analyse the remaining six studies due to an unexpected phenomenon we call the "correct answer" effect. Different runs of GPT3.5 answered nuanced questions probing political orientation, economic preference, judgement, and moral philosophy with zero or near-zero variation, converging on a single, supposedly "correct" answer. Most, but not all, of these "correct answers" were robust to changing the order of answer choices. One exception occurred in the Moral Foundations Theory survey (Graham et al., 2009), in which GPT3.5 almost always identified as a conservative in the original condition (N=1,030, 99.6%) and as a liberal in the reverse-order condition (N=1,030, 99.3%). GPT3.5's responses to subsequent questions revealed post-hoc rationalisation: a relative bias in the direction of its previously reported political orientation. Yet both self-reported GPT conservatives and self-reported GPT liberals exhibited right-leaning Moral Foundations, although the right-leaning bias of self-reported GPT liberals was weaker. We hypothesise that this pattern was learned from a conservative bias in the model's largely Internet-based training data. Since AI models of the future may be trained on much of the same Internet data as GPT3.5, our results raise concerns that a hypothetical AI-led future may be subject to a diminished diversity of thought.