Despite the surprising power of many modern AI systems that often learn their own representations, there is significant discontent about their inscrutability and the attendant problems in their ability to interact with humans. While alternatives such as neuro-symbolic approaches have been proposed, there is a lack of consensus on what they are about. There are often two independent motivations: (i) symbols as a lingua franca for human-AI interaction, and (ii) symbols as system-produced abstractions used by the AI system in its internal reasoning. The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities. Whatever the answer turns out to be, the need for (human-understandable) symbols in human-AI interaction seems quite compelling. Symbols, like emotions, may well not be sine qua non for intelligence per se, but they will be crucial for AI systems to interact with us humans -- as we can neither turn off our emotions nor get by without our symbols. In particular, in many human-designed domains, humans would be interested in providing explicit (symbolic) knowledge and advice -- and expect machine explanations in kind. This alone requires AI systems to maintain a symbolic interface for interaction with humans. In this blue sky paper, we argue this point of view, and discuss research directions that need to be pursued to allow for this type of human-AI interaction.