Artificial Intelligence (AI) is about making computers that do the sorts of things that minds can do, and as we progress towards this goal, we tend to delegate ever more human tasks to machines. However, AI systems usually perform these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would previously have brought to the activity are utterly absent. It is therefore crucial to ask which features of minds we have replicated, which are missing, and whether that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is utterly missing from current mainstream AI. In this paper we ask what reflective AI might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents and highlight ways forward.