How does one measure "ability to understand language"? If it is a person's ability that is being measured, this is a question that almost never poses itself in an unqualified manner: Whatever formal test is applied, it takes place against the background of the person's language use in daily social practice, and what is measured is a specialised variety of language understanding (e.g., of a second language; or of written, technical language). Computer programs do not have this background. What does that mean for the applicability of formal tests of language understanding? I argue that such tests need to be complemented by tests of language use embedded in a practice, to arrive at a more comprehensive evaluation of "artificial language understanding". To do such tests systematically, I propose to use "Dialogue Games" -- constructed activities that provide a situational embedding for language use. I describe a taxonomy of Dialogue Game types, linked to a model of the underlying capabilities that are tested, thereby providing an argument for the \emph{construct validity} of the test. I close by showing how the internal structure of the taxonomy suggests an ordering from more specialised to more general situational language understanding, which can potentially provide some strategic guidance for development in this field.