Despite the surprising power of many modern AI systems that often learn their own representations, there is significant discontent about their inscrutability and the attendant problems in their ability to interact with humans. While alternatives such as neuro-symbolic approaches have been proposed, there is a lack of consensus on what they are about. There are often two independent motivations: (i) symbols as a lingua franca for human-AI interaction, and (ii) symbols as (system-produced) abstractions used in the system's internal reasoning. The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities. Whatever the answer, the need for (human-understandable) symbols in human-AI interaction seems quite compelling. Symbols, like emotions, may well not be sine qua non for intelligence per se, but they will be crucial for AI systems to interact with us humans--as we can neither turn off our emotions nor get by without our symbols. In particular, in many human-designed domains, humans would be interested in providing explicit (symbolic) knowledge and advice--and would expect machine explanations in kind. This alone requires AI systems to at least do their I/O in symbolic terms. In this blue-sky paper, we argue for this point of view, and discuss research directions that need to be pursued to allow for this type of human-AI interaction.