Large language models (LLMs) have exhibited impressive competency in various text-related tasks. However, their opaque internal mechanisms hinder their application to mathematical problems. In this paper, we study a fundamental question: whether language models understand numbers, the basic elements of mathematical problems. We hypothesize that to solve mathematical problems, language models should be capable of understanding numbers and compressing them into their hidden states. We construct a synthetic dataset of addition problems and use linear probes to read the input numbers out of the models' hidden states. Experimental results provide evidence that the LLaMA-2 model family compresses input numbers from early layers onward. However, the compression does not appear to be lossless, making it difficult to reconstruct the original numbers precisely. Further experiments show that language models can utilize the encoded numbers to perform arithmetic computations, and that this computational ability scales with model size. Our preliminary research suggests that language models exhibit a partial understanding of numbers, offering insights for future investigations into their capability to solve mathematical problems.
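To make the probing setup concrete, the sketch below illustrates one way such an experiment could look: synthetic "a+b=" prompts are fed to a transformer, a hidden-state vector is taken from an intermediate layer, and a ridge regressor is trained to recover the first operand. This is not the paper's code; GPT-2 stands in for LLaMA-2 only because it is small and ungated, and the prompt format, probe layer, and ridge probe are illustrative assumptions.

```python
# Minimal linear-probe sketch (assumptions noted above), not the authors' implementation.
import random
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

LAYER = 6  # assumed probe layer; a real study would sweep all layers

# Synthetic addition prompts; the probe target is the first operand.
prompts, targets = [], []
for _ in range(500):
    a, b = random.randint(0, 999), random.randint(0, 999)
    prompts.append(f"{a}+{b}=")
    targets.append(a)

# Collect the last-token hidden state at the chosen layer for each prompt.
features = []
with torch.no_grad():
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt")
        hidden = model(**ids).hidden_states[LAYER]  # shape: (1, seq_len, dim)
        features.append(hidden[0, -1].numpy())

X_train, X_test, y_train, y_test = train_test_split(
    np.stack(features), np.array(targets), test_size=0.2, random_state=0
)
probe = Ridge(alpha=1.0).fit(X_train, y_train)
print("probe R^2 on held-out prompts:", probe.score(X_test, y_test))
```

A high held-out R^2 would indicate that the operand is linearly decodable from that layer's hidden state, while imperfect reconstruction would be consistent with the lossy compression described above.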