Humans use causality and hypothetical retrospection in their daily decision-making, planning, and understanding of life events. While retrospecting on a given situation, the human mind considers questions such as "What was the cause of the given situation?", "What would be the effect of my action?", or "Which action led to this effect?". It develops a causal model of the world that learns from fewer data points, makes inferences, and contemplates counterfactual scenarios. These unseen, hypothetical scenarios are known as counterfactuals. AI algorithms use representations based on knowledge graphs (KGs) to capture the concepts of time, space, and facts. A KG is a graphical data model that captures the semantic relationships between entities such as events, objects, or concepts. Existing KGs, such as ConceptNet and WordNet, represent causal relationships extracted from text based on linguistic patterns of noun phrases denoting causes and effects. This style of causality representation makes it challenging to support counterfactual reasoning. A richer representation of causality in AI systems, using a KG-based approach, is needed for better explainability and for support of interventional and counterfactual reasoning, leading to an improved human understanding of AI systems. Such a representation requires a higher-level framework that defines the context, the causal information, and the causal effects. The proposed Causal Knowledge Graph (CausalKG) framework leverages recent progress in causality and KG research toward explainability. CausalKG aims to address the lack of a domain-adaptable causal model and to represent complex causal relations using a hyper-relational graph representation in the KG. We show that CausalKG's interventional and counterfactual reasoning can be used by an AI system for domain explainability.
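To make the hyper-relational idea concrete, the following is a minimal illustrative sketch, not the paper's actual CausalKG implementation: a causal link is stored not as a bare (cause, causes, effect) triple but as a statement enriched with qualifiers such as context and causal-effect strength. All class names, qualifier keys, and example entities here are assumptions chosen for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CausalStatement:
    """A hyper-relational causal statement: a base (cause, causes, effect)
    triple plus qualifier key-value pairs (e.g. context, effect strength)."""
    cause: str
    effect: str
    qualifiers: tuple = ()


class CausalGraph:
    """A toy hyper-relational causal graph (illustrative only)."""

    def __init__(self):
        self.statements = []

    def add(self, cause, effect, **qualifiers):
        # Qualifiers are sorted into a tuple so statements are hashable
        # and order-independent.
        stmt = CausalStatement(cause, effect, tuple(sorted(qualifiers.items())))
        self.statements.append(stmt)
        return stmt

    def effects_of(self, cause):
        """Return all recorded effects of a given cause."""
        return [s.effect for s in self.statements if s.cause == cause]


# Hypothetical example: a treatment linked to two outcomes, each qualified
# with a context and an assumed causal-effect strength.
g = CausalGraph()
g.add("Treatment", "Recovery", context="ClinicalTrial", causal_effect=0.7)
g.add("Treatment", "SideEffect", context="ClinicalTrial", causal_effect=0.2)
```

A plain triple store would have to drop the context and strength qualifiers or reify them awkwardly; attaching them directly to the statement is what the hyper-relational representation buys.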