The increasing adoption of artificial intelligence calls for accurate forecasts and for means to understand the reasoning of artificial intelligence models behind such forecasts. Explainable Artificial Intelligence (XAI) aims to provide cues for why a model issued a certain prediction. Such cues are of utmost importance to decision-making, since they provide insights into the features that most influenced a given forecast and let the user decide whether the forecast can be trusted. Although many techniques have been developed to explain black-box models, little research has been done on assessing the quality of those explanations and their influence on decision-making. We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions. In this way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options. We tailor the knowledge graph to the domain of demand forecasting and validate it on real-world data.
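To make the idea concrete, the sketch below shows how such a feedback-collection knowledge graph could be instantiated with rdflib in Python. The vocabulary used here (e.g., ex:Forecast, ex:ForecastExplanation, ex:DecisionOption, ex:UserAction, ex:Feedback and the linking properties) is an illustrative assumption, not the ontology actually proposed in the paper.

```python
# Minimal sketch of a feedback-collection knowledge graph, assuming an
# illustrative vocabulary (ex:Forecast, ex:ForecastExplanation, ...) that
# does NOT correspond to the ontology defined in the paper.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, XSD

EX = Namespace("http://example.org/demand-forecasting#")
g = Graph()
g.bind("ex", EX)

# Declare the core classes of the (hypothetical) vocabulary.
for cls in (EX.Forecast, EX.ForecastExplanation, EX.DecisionOption,
            EX.UserAction, EX.Feedback):
    g.add((cls, RDF.type, RDFS.Class))

# A demand forecast for one product, its explanation, and a recommended option.
g.add((EX.forecast42, RDF.type, EX.Forecast))
g.add((EX.forecast42, EX.predictedDemand, Literal(1250, datatype=XSD.integer)))
g.add((EX.explanation42, RDF.type, EX.ForecastExplanation))
g.add((EX.explanation42, EX.topFeature, Literal("recent promotion uplift")))
g.add((EX.forecast42, EX.hasExplanation, EX.explanation42))
g.add((EX.option42, RDF.type, EX.DecisionOption))
g.add((EX.option42, RDFS.label, Literal("increase safety stock by 10%")))
g.add((EX.forecast42, EX.recommendsOption, EX.option42))

# User feedback on the explanation and the action actually taken.
g.add((EX.feedback42, RDF.type, EX.Feedback))
g.add((EX.feedback42, EX.explanationHelpful, Literal(True)))
g.add((EX.feedback42, EX.refersTo, EX.explanation42))
g.add((EX.action42, RDF.type, EX.UserAction))
g.add((EX.action42, EX.acceptedOption, EX.option42))

print(g.serialize(format="turtle"))
```

Queries over such feedback triples (e.g., how often explanations were marked helpful, or which recommended options were actually accepted) could then drive the improvement of forecasting models, explanations, and decision-making recommendations described above.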