Recent applications of autonomous agents and robots, such as self-driving cars, scenario-based trainers, exploration robots, and service robots, have brought attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the ability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are black boxes, which renders their decisions and actions opaque and makes them difficult to trust in safety-critical applications. Recent interest in the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still missing. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses and vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.