A key aim of science is explanation, yet the idea of explaining language phenomena has taken a back seat in mainstream Natural Language Processing (NLP) and many other areas of Artificial Intelligence. I argue that explanation of linguistic behaviour should be a main goal of NLP, and that this is not the same as making NLP models explainable. To illustrate these ideas, I compare some recent models of human language production with each other. I conclude by asking what it would mean for NLP research and institutional policies if our community took explanatory value seriously, while heeding some possible pitfalls.