Advances in computational methods and the availability of big data have recently translated into breakthroughs in AI applications. With successes on bottom-up challenges partially overshadowing shortcomings, the 'human-like' performance of Large Language Models has raised the question of how linguistic performance is achieved by these algorithms. Given systematic shortcomings in generalization across many AI systems, in this work we ask whether linguistic performance in Large Language Models is indeed guided by knowledge of language. To this end, we prompt GPT-3 with a grammaticality judgement task and comprehension questions on less frequent constructions that are therefore unlikely to form part of Large Language Models' training data. These included grammatical 'illusions', semantic anomalies, complex nested hierarchies and self-embeddings. GPT-3 failed on every prompt but one, often offering answers that reveal a critical lack of understanding even of the high-frequency words used in these less frequent grammatical constructions. The present work sheds light on the limits of the allegedly human-like linguistic competence of AI and argues that, far from human-like, the next-word prediction abilities of LLMs may face issues of robustness when pushed beyond their training data.
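For concreteness, a minimal sketch of the kind of query described above is shown below: a grammaticality judgement prompt sent to a GPT-3-family model. This is not the authors' exact protocol; it assumes the legacy openai-python (<1.0) Completions API, and both the stimulus sentence and the prompt wording are illustrative placeholders rather than the paper's materials.

```python
# Minimal sketch (not the paper's exact setup): asking a GPT-3-family model
# for a grammaticality judgement on a low-frequency construction.
# Assumes the legacy openai-python (<1.0) Completions endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Illustrative doubly centre-embedded sentence (hypothetical stimulus,
# not taken from the paper's materials).
stimulus = "The rat the cat the dog chased bit escaped."

prompt = (
    "Is the following sentence grammatical? Answer Yes or No, then explain.\n\n"
    f"Sentence: {stimulus}\nAnswer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model; the paper's exact model may differ
    prompt=prompt,
    max_tokens=64,
    temperature=0,  # deterministic decoding for judgement tasks
)

print(response["choices"][0]["text"].strip())
```

Comprehension questions on the same stimuli can be posed analogously, by swapping the judgement instruction for a question about who did what to whom.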