In decision support systems, the motivation and justification of the system's diagnosis or classification are crucial for the acceptance of the system by the human user. In Bayesian networks, a diagnosis or classification is typically formalized as the computation of the most probable joint value assignment to the hypothesis variables, given the observed values of the evidence variables (generally known as the MAP problem). While solving the MAP problem gives the most probable explanation of the evidence, the computation is a black box as far as the human user is concerned, and it does not give additional insights that allow the user to appreciate and accept the decision. For example, a user might want to know whether an unobserved variable could potentially (upon observation) impact the explanation, or whether it is irrelevant in this respect. In this paper we introduce a new concept, MAP-independence, which tries to capture this notion of relevance, and explore its role towards a potential justification of an inference to the best explanation. We formalize several computational problems based on this concept and assess their computational complexity.
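As a minimal sketch of the notions described above (the notation H, E, e, R, h, and Pr is chosen here for illustration and is not fixed by the abstract): with hypothesis variables H and evidence variables E observed to take the values e, the MAP problem asks for the most probable joint value assignment

\[
h^{*} \;=\; \operatorname*{argmax}_{h} \Pr(H = h \mid E = e),
\]

and, informally, an unobserved variable R would be irrelevant in the sense described above when no possible observation of R can change this assignment, i.e.,

\[
\operatorname*{argmax}_{h} \Pr(H = h \mid E = e, R = r) \;=\; h^{*} \quad \text{for every value } r \text{ of } R.
\]

This reading follows the abstract's informal description of MAP-independence; the paper's formal definition may differ in detail.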