The transformer architecture, introduced by Vaswani et al. (2017), is at the heart of the remarkable recent progress in the development of language models, including widely used chatbots such as ChatGPT and Claude. In this paper, I argue that we can extract from the way the transformer architecture works a theory of the relationship between context and meaning. I call this the transformer theory, and I argue that it is novel with respect to two related philosophical debates: the contextualism debate regarding the extent of context-sensitivity across natural language, and the polysemy debate regarding how polysemy should be captured within an account of word meaning.