Large Language Models (LLMs) have been transformative. They are pre-trained foundation models that can be adapted with fine-tuning to many different natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3, and more recently LaMDA, can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions to whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs that reached wildly different conclusions. A new possibility that could explain this divergence is that what appears to be intelligence in LLMs may in fact be a mirror reflecting the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewers than about the intelligence of the LLMs. As LLMs become more capable, they may transform the way we access and use information.