In this paper we apply our understanding of the radical enactivist agenda to a classic AI-hard problem. Natural Language Understanding is a sub-field of AI research that looked easy to the pioneers: the Turing Test, in its original form, assumed that the computer could use language, and the challenge was to fake human intelligence. It turned out that playing chess and doing formal logic were easy compared with acquiring the necessary language skills. Good old-fashioned AI (GOFAI) assumes that symbolic representation is the core of reasoning and that human communication consists of transferring representations from one mind to another. On this model, however, representations appear in another's mind without appearing in the intermediary language; it seems people communicate by mind reading. Systems with speech interfaces such as Alexa and Siri are of course common, but they are limited. Rather than adding mind-reading skills, we introduced a "cheat" that enabled our systems to fake it. The cheat is simple, only slightly interesting to computer scientists, and not at all interesting to philosophers. However, on reading the enactivist idea that we "directly perceive" the intentions of others, our cheat took on a new light, and in this paper we look again at how natural language understanding might actually work between humans.