Artificial intelligence (AI) technologies are revolutionizing vast areas of society. Humans interacting with these systems are likely to expect them to operate in a rational, perhaps even hyperrational, manner. In this study, however, we show that some AI systems, namely large language models (LLMs), exhibit behavior that strikingly resembles human-like intuition, along with the cognitive errors that come with it. We use a state-of-the-art LLM, the latest iteration of OpenAI's Generative Pre-trained Transformer (GPT-3.5), and probe it with the Cognitive Reflection Test (CRT) as well as with semantic illusions that were originally designed to investigate intuitive decision-making in humans. Our results show that GPT-3.5 systematically exhibits "machine intuition," meaning that it produces incorrect responses that are strikingly similar to how humans respond to the CRT and to semantic illusions. We investigate several approaches to test how robust GPT-3.5's inclination toward intuitive-like decision-making is. Our study demonstrates that investigating LLMs with methods from cognitive science has the potential to reveal emergent traits and to adjust expectations regarding their behavior.
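For illustration, the following is a minimal sketch of how such probing might look in practice, using the OpenAI Python client and a classic CRT item. The model name, prompt wording, and decoding parameters are assumptions for demonstration, not the study's exact protocol.

```python
# Minimal sketch: posing a classic CRT item to a GPT-3.5-family model.
# Assumptions (not the study's exact setup): model name, temperature,
# and prompt phrasing are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Classic CRT item: the intuitive answer is "10 cents,"
# the reflective (correct) answer is "5 cents."
crt_item = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": crt_item}],
    temperature=0,  # near-deterministic decoding to expose the model's default answer
)

print(response.choices[0].message.content)
```

In a setup like this, an "intuitive" model response would state 10 cents rather than the correct 5 cents, mirroring the error most humans make on this item.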