How do humans learn language, and can a first language be learned at all? These fundamental questions are still hotly debated. In contemporary linguistics, there are two major schools of thought that give completely opposite answers. According to Chomsky's theory of universal grammar, language cannot be learned because children are not exposed to sufficient data in their linguistic environment. In contrast, usage-based models of language assume a profound relationship between language structure and language use. In particular, contextual mental processing and mental representations are assumed to have the cognitive capacity to capture the complexity of actual language use at all levels. The prime example is syntax, i.e., the rules by which words are assembled into larger units such as sentences. Typically, syntactic rules are expressed as sequences of word classes. However, it remains unclear whether word classes are innate, as implied by universal grammar, or whether they emerge during language acquisition, as suggested by usage-based approaches. Here, we address this issue from a machine learning and natural language processing perspective. In particular, we trained an artificial deep neural network to predict the next word, given sequences of consecutive words as input. Subsequently, we analyzed the emerging activation patterns in the hidden layers of the neural network. Strikingly, we find that the internal representations of nine-word input sequences cluster according to the word class of the tenth word to be predicted as output, even though the neural network received no explicit information about syntactic rules or word classes during training. This surprising result suggests that, in the human brain as well, abstract representational categories such as word classes may emerge naturally as a consequence of predictive coding and processing during language acquisition.
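The training setup described above can be sketched in miniature. The following is a toy illustration, not the study's actual model: the corpus, vocabulary, context length, layer sizes, and learning rate are all invented for demonstration. A small feedforward network is trained by stochastic gradient descent to predict the next word from a fixed window of preceding words, and the hidden-layer activations for each input window are then collected, analogous to the internal representations that were found to cluster by word class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy corpus and vocabulary, purely for illustration.
corpus = "the cat sees the dog the dog sees the cat".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V = len(vocab)
CTX = 2   # context window (the study used nine words)
D = 4     # embedding dimension
H = 8     # hidden-layer width

# Build (context, next-word) training pairs from the corpus.
X = np.array([[w2i[w] for w in corpus[i:i + CTX]]
              for i in range(len(corpus) - CTX)])
y = np.array([w2i[corpus[i + CTX]] for i in range(len(corpus) - CTX)])

E = rng.normal(0, 0.1, (V, D))         # word embeddings
W1 = rng.normal(0, 0.1, (CTX * D, H))  # input -> hidden weights
W2 = rng.normal(0, 0.1, (H, V))        # hidden -> output logits

def forward(ctx):
    x = E[ctx].reshape(-1)             # concatenate context embeddings
    h = np.tanh(x @ W1)                # hidden activations inspected below
    z = h @ W2
    z -= z.max()                       # stable softmax over next word
    p = np.exp(z); p /= p.sum()
    return x, h, p

def mean_loss():
    return -np.mean([np.log(forward(c)[2][t]) for c, t in zip(X, y)])

lr, before = 0.1, mean_loss()
for _ in range(300):                   # plain SGD on cross-entropy
    for ctx, tgt in zip(X, y):
        x, h, p = forward(ctx)
        dz = p.copy(); dz[tgt] -= 1.0  # dL/dlogits = softmax - one-hot
        da = (dz @ W2.T) * (1 - h**2)  # backprop through tanh
        W2 -= lr * np.outer(h, dz)
        dx = da @ W1.T
        W1 -= lr * np.outer(x, da)
        # np.add.at handles repeated word indices in one context window.
        np.add.at(E, ctx, -lr * dx.reshape(CTX, D))

after = mean_loss()

# Hidden-layer representation of every input window: the analogue of the
# activation patterns that clustered by the predicted word's class.
reps = np.stack([forward(c)[1] for c in X])
```

On real data, these `reps` vectors would be the input to a clustering or dimensionality-reduction step; in this toy version they merely show where such representations live in the network.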