Large Language Models (LLMs) have been transformative. They are pre-trained foundation models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and, more recently, LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range of reactions and debate on whether these LLMs understand what they are saying or exhibit signs of intelligence. This high variance is exhibited in three interviews with LLMs reaching wildly different conclusions. A new possibility was uncovered that could explain this divergence. What appears to be intelligence in LLMs may in fact be a mirror that reflects the intelligence of the interviewer, a remarkable twist that could be considered a Reverse Turing Test. If so, then by studying interviews we may be learning more about the intelligence and beliefs of the interviewer than about the intelligence of the LLMs. As LLMs become more capable, they may transform the way we interact with machines and how they interact with each other. Increasingly, LLMs are being coupled with sensorimotor devices. LLMs can talk the talk, but can they walk the walk? A road map for achieving artificial general autonomy is outlined with seven major improvements inspired by brain systems. LLMs could also be used to uncover new insights into brain function by downloading brain data during natural behaviors.