Language Models appear to perform poorly on quantification. We ask how badly. 'Few'-type quantifiers, as in 'few children like vegetables', might pose a particular challenge for Language Models, since the sentence components without the quantifier are likely to co-occur, and because 'few'-type quantifiers are rare. We present 960 sentence stimuli from two human neurolinguistic experiments to 22 autoregressive transformer models of differing sizes. Not only do the models perform poorly on 'few'-type quantifiers, but overall the larger the model, the worse its performance. We interpret this inverse scaling as suggesting that larger models increasingly reflect online rather than offline human processing, and argue that the decreasing performance of larger models may challenge the use of Language Models as the basis for Natural Language Systems.
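To make the measurement concrete, here is a minimal sketch (not the authors' code or stimuli) of how one might score a sentence-final continuation with an autoregressive transformer and compare it across quantifiers. The model choice (gpt2 via Hugging Face transformers) and the example sentences are illustrative assumptions; the paper's own stimuli and models differ.

```python
# Illustrative sketch: score a continuation with an autoregressive LM.
# A model sensitive to the quantifier should assign a *lower* probability
# to the typical continuation after 'Few' than after 'Most'; the inverse
# scaling result suggests larger models often fail to do this.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # assumed model choice
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(context: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `context`."""
    context_ids = tokenizer.encode(context)
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([context_ids + cont_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # The token at absolute position i is predicted from position i - 1.
    total = 0.0
    for i, tok in enumerate(cont_ids):
        total += log_probs[0, len(context_ids) + i - 1, tok].item()
    return total

# Hypothetical stimulus pair modeled on the abstract's example:
print(continuation_logprob("Most children like", " vegetables"))
print(continuation_logprob("Few children like", " vegetables"))
```

Repeating this comparison over many stimulus pairs and over models of differing sizes is one way to test whether accuracy on 'few'-type quantifiers degrades as models grow, as the abstract reports.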