Recent work has shown that large pretrained Language Models (LMs) not only perform remarkably well on a range of Natural Language Processing (NLP) tasks but also begin to improve on reasoning tasks such as arithmetic induction, symbolic manipulation, and commonsense reasoning as model size increases. However, it remains unclear what the underlying capabilities of these LMs are. Surprisingly, we find that these models have limitations on certain basic symbolic manipulation tasks such as copying, reversing, and addition. When the total number of symbols or of repeating symbols increases, model performance drops quickly. We investigate the potential causes of this phenomenon and examine a set of possible remedies, including explicit positional markers, fine-grained computation steps, and LMs with callable programs. Experimental results show that none of these techniques can completely solve even the simplest addition induction problem. Finally, we introduce LMs with tutor, which demonstrates every single teaching step. LMs with tutor achieves 100% accuracy on out-of-distribution (OOD) inputs and inputs with repeating symbols, shedding new light on the boundaries of large LMs in induction.