We propose a method for using a large language model, such as GPT-3, to simulate the responses of different humans in a given context. We test the method by attempting to reproduce well-established economic, psycholinguistic, and social-psychology experiments. The method requires a prompt template for each experiment; simulations are run by varying the (hypothetical) subject details, such as name, and analyzing the text generated by the language model. To validate the methodology, we use GPT-3 to simulate the Ultimatum Game, garden-path sentences, risk aversion, and the Milgram Shock experiment. To address the concern that these studies appear in the training data, we also evaluate simulations on novel variants of them. We show that it is possible to simulate the responses of different people and that their responses are largely consistent with prior human studies from the literature. Using large language models as simulators offers advantages but also poses risks. We contrast this use of a language model as a simulator with anthropomorphic views that treat the model as having behavior of its own.
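The simulation loop described above can be sketched in a few lines. The template wording, function names, and the accept/reject parsing rule below are illustrative assumptions, not the paper's actual prompts; the key idea is only that subject details are substituted into a fixed template and the model's completion is classified.

```python
# Hypothetical sketch of the prompt-template simulation method.
# TEMPLATE, build_prompt, parse_decision, and query_model are all
# illustrative names, not taken from the paper itself.

TEMPLATE = (
    "{proposer} is given $10 and offers ${offer} to {responder}, "
    "keeping ${keep} for themselves.\n"
    "{responder} decides to"
)

def build_prompt(proposer: str, responder: str, offer: int, total: int = 10) -> str:
    """Fill the Ultimatum Game template with (hypothetical) subject details."""
    return TEMPLATE.format(
        proposer=proposer, responder=responder, offer=offer, keep=total - offer
    )

def parse_decision(completion: str) -> bool:
    """Crudely classify the generated text as accept (True) or reject (False)."""
    return "accept" in completion.lower()

def simulate(responder_names, offers, query_model):
    """Vary subject names and offers; record one decision per condition.

    query_model is any callable mapping a prompt string to generated text,
    e.g. a wrapper around an LLM completion API.
    """
    results = {}
    for name in responder_names:
        for offer in offers:
            prompt = build_prompt("Alex", name, offer)
            results[(name, offer)] = parse_decision(query_model(prompt))
    return results
```

In a real run, `query_model` would call the language model once per condition (or several times per condition to estimate a response distribution), and the per-name results would then be aggregated and compared against the human baselines from the literature.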