The ability to perform arithmetic tasks is a remarkable trait of human intelligence and might form a critical component of more complex reasoning tasks. In this work, we investigate if the surface form of a number has any influence on how sequence-to-sequence language models learn simple arithmetic tasks such as addition and subtraction across a wide range of values. We find that how a number is represented in its surface form has a strong influence on the model's accuracy. In particular, the model fails to learn addition of five-digit numbers when using subwords (e.g., "32"), and it struggles to learn with character-level representations (e.g., "3 2"). By introducing position tokens (e.g., "3 10e1 2"), the model learns to accurately add and subtract numbers up to 60 digits. We conclude that modern pretrained language models can easily learn arithmetic from very few examples, as long as we use the proper surface representation. This result bolsters evidence that subword tokenizers and positional encodings are components in current transformer designs that might need improvement. Moreover, we show that regardless of the number of parameters and training examples, models cannot learn addition rules that are independent of the length of the numbers seen during training. Code to reproduce our experiments is available at https://github.com/castorini/transformers-arithmetic
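As an illustration of the position-token ("10e") surface form mentioned above, the sketch below converts a non-negative integer into that representation. This is a hypothetical helper, not the authors' released code: whether the units digit also carries an explicit "10e0" token, and how negative subtraction results are rendered, are assumptions here; the sketch simply omits the units-place token so that 32 maps to "3 10e1 2", matching the example in the text.

    def to_position_tokens(number: int) -> str:
        """Render a non-negative integer with explicit decimal-position tokens,
        e.g. 32 -> "3 10e1 2" (units-place token omitted; see note above)."""
        assert number >= 0, "negative numbers are out of scope for this sketch"
        digits = str(number)
        tokens = []
        for i, d in enumerate(digits):
            tokens.append(d)
            exponent = len(digits) - 1 - i  # decimal place of this digit
            if exponent > 0:
                tokens.append(f"10e{exponent}")
        return " ".join(tokens)

    print(to_position_tokens(32))     # 3 10e1 2
    print(to_position_tokens(60412))  # 6 10e4 0 10e3 4 10e2 1 10e1 2

Feeding numbers to the sequence-to-sequence model in this form gives the tokenizer one token per digit plus an explicit marker of each digit's place value, which is the property the abstract credits for accurate addition and subtraction of long numbers.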