When human cognition is modeled in Philosophy and Cognitive Science, a pervasive idea is that humans employ mental representations in order to navigate the world and to predict the outcomes of future actions. By understanding how these representational structures work, we not only learn more about human cognition but also gain a better understanding of how humans rationalise and explain decisions. This bears directly on explainable AI (XAI), where the goal is to provide explanations of computer decision-making for a human audience. We show that the Contextual Importance and Utility (CIU) method for XAI overlaps with the current wave of action-oriented predictive representational structures, in ways that make CIU a reliable tool for creating explanations that humans can relate to and trust.