The underlying hypothesis of knowledge-based explainable artificial intelligence is that the data required by data-centric artificial intelligence agents (e.g., neural networks) are less diverse in content than the data required to explain the decisions of such agents to humans. The idea is that a classifier can attain high accuracy using data that express a phenomenon from a single perspective, whereas the audience for explanations can comprise multiple stakeholders and span diverse perspectives. We hence propose to use domain knowledge to complement the data used by agents. We formulate knowledge-based explainable artificial intelligence as a supervised data classification problem aligned with the CBR methodology. In this formulation, the inputs are case problems, composed of both the inputs and the outputs of the data-centric agent, and the outputs are case solutions, namely explanation categories obtained from domain knowledge and subject matter experts. This formulation does not typically yield an accurate classification, preventing the selection of the correct explanation category. Knowledge-based explainable artificial intelligence extends the data in this formulation by adding features aligned with domain knowledge, which can increase accuracy when selecting explanation categories.
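To make the formulation concrete, the following is a minimal sketch of this classification setup, not the authors' implementation. All names (`agent_inputs`, `agent_outputs`, `domain_features`, `explanation_category`) are hypothetical placeholders, and the synthetic data are constructed so that the explanation category depends partly on domain knowledge the agent never sees, purely to illustrate why extending the case problems with domain-knowledge features can raise accuracy.

```python
# Sketch of knowledge-based XAI as supervised classification (assumed setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases = 500

# Case problems: the inputs of the data-centric agent plus its outputs.
agent_inputs = rng.normal(size=(n_cases, 4))           # features seen by the agent
agent_outputs = rng.integers(0, 2, size=(n_cases, 1))  # the agent's decisions
case_problems = np.hstack([agent_inputs, agent_outputs])

# Features derived from domain knowledge (stand-in for expert-provided data).
domain_features = rng.normal(size=(n_cases, 2))

# Case solutions: explanation categories assigned by subject matter experts.
# Here they depend on both the agent's decision and the domain knowledge,
# which the baseline case problems only partially capture.
explanation_category = (domain_features[:, 0] > 0).astype(int) + agent_outputs.ravel()

# Baseline formulation: classify explanation categories from case problems alone.
baseline = cross_val_score(RandomForestClassifier(random_state=0),
                           case_problems, explanation_category, cv=5)

# Knowledge-based extension: append the domain-knowledge features.
extended_problems = np.hstack([case_problems, domain_features])
extended = cross_val_score(RandomForestClassifier(random_state=0),
                           extended_problems, explanation_category, cv=5)

print(f"baseline accuracy: {baseline.mean():.2f}")
print(f"extended accuracy: {extended.mean():.2f}")
```

Under these assumptions, the baseline classifier cannot reliably select the correct explanation category, while the extended case problems recover the missing perspective and improve accuracy, mirroring the argument above.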