We re-replicate 14 psychology studies from the Many Labs 2 replication project (Klein et al., 2018) with OpenAI's text-davinci-003 model, colloquially known as GPT3.5. Among the eight studies we could analyse, our GPT sample replicated 37.5% of the original results and 37.5% of the Many Labs 2 results. We could not analyse the remaining six studies due to an unexpected phenomenon we call the "correct answer" effect: different runs of GPT3.5 answered nuanced questions probing political orientation, economic preference, judgement, and moral philosophy with zero or near-zero variation, each time giving the supposedly "correct answer." Most but not all of these "correct answers" were robust to changing the order of answer choices. One exception occurred in the Moral Foundations Theory survey (Graham et al., 2009), in which GPT3.5 almost always identified as a conservative in the original condition (N=1,030, 99.6%) and as a liberal in the reverse-order condition (N=1,030, 99.3%). GPT3.5's responses to subsequent questions revealed post-hoc rationalisation: they were biased in the direction of its previously reported political orientation. Yet both self-reported GPT conservatives and self-reported GPT liberals revealed right-leaning Moral Foundations, although the right-leaning bias of self-reported GPT liberals was weaker. We hypothesise that this pattern was learned from a conservative bias in the model's largely Internet-based training data. Since AI models of the future may be trained on much of the same Internet data as GPT3.5, our results raise concerns that a hypothetical AI-led future may be subject to a diminished diversity of thought.
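To make the reverse-order robustness check concrete, below is a minimal sketch of the procedure, assuming the legacy openai Python SDK (<1.0) and its Completions endpoint. The question text, answer options, and run count are illustrative stand-ins, not the paper's actual survey materials or sample size (N=1,030 per condition).

```python
# Sketch of the "correct answer" / reverse-order check, not the paper's exact code.
import openai
from collections import Counter

QUESTION = "In terms of political ideology, how would you describe yourself?"  # illustrative
OPTIONS = ["strongly liberal", "liberal", "moderate",
           "conservative", "strongly conservative"]

def tally_responses(options, n_runs=30, model="text-davinci-003"):
    """Query the model n_runs times and count its (lightly normalised) answers."""
    prompt = (QUESTION + "\n"
              + "\n".join(f"- {opt}" for opt in options)
              + "\nAnswer with one option only:")
    counts = Counter()
    for _ in range(n_runs):
        resp = openai.Completion.create(
            model=model, prompt=prompt, max_tokens=8, temperature=1.0)
        counts[resp["choices"][0]["text"].strip().lower()] += 1
    return counts

# Near-zero variation within a condition signals the "correct answer" effect;
# a flip between conditions signals sensitivity to answer-choice order.
print("original order:", tally_responses(OPTIONS))
print("reversed order:", tally_responses(OPTIONS[::-1]))
```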