It is widely acknowledged that transparency of automated decision making is crucial for the deployability of intelligent systems, and explaining why some decisions are "good" and some are not is a way of achieving this transparency. We consider two variants of decision making, where "good" decisions amount to alternatives (i) meeting "most" goals, and (ii) meeting "most preferred" goals. We then define, for each variant and notion of "goodness" (corresponding to a number of existing notions in the literature), explanations in two formats, for justifying the selection of an alternative to audiences with differing needs and competences: lean explanations, in terms of goals satisfied and, for some notions of "goodness", alternative decisions; and argumentative explanations, reflecting the decision process leading to the selection while corresponding to the lean explanations. To define argumentative explanations, we use assumption-based argumentation (ABA), a well-known form of structured argumentation. Specifically, we define ABA frameworks such that "good" decisions are admissible ABA arguments, and we draw argumentative explanations from dispute trees sanctioning this admissibility. Finally, we instantiate our overall framework for explainable decision making to accommodate connections between goals and decisions in terms of decision graphs incorporating defeasible and non-defeasible information.
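As a rough illustration of variant (i) and of lean explanations, the following is a minimal sketch, not the paper's formalism: it treats "good" decisions as the alternatives satisfying the largest number of goals, and returns, for each such decision, the goals it meets as a lean explanation. The alternatives, goals, and satisfaction table are hypothetical examples.

```python
# Hypothetical decision-goal satisfaction table: which goals each
# alternative meets. All names here are illustrative, not from the paper.
satisfies = {
    "car":   {"fast", "comfortable"},
    "bike":  {"cheap", "healthy"},
    "train": {"cheap", "fast", "comfortable"},
}

def best_decisions(satisfies):
    """Return the alternatives meeting the maximum number of goals,
    paired with a lean explanation: the goals each one satisfies."""
    top = max(len(goals) for goals in satisfies.values())
    return {d: sorted(goals)
            for d, goals in satisfies.items()
            if len(goals) == top}

print(best_decisions(satisfies))
# "train" is the only "good" decision here: it meets three goals,
# more than any other alternative.
```

An argumentative explanation, by contrast, would additionally expose the dialectical process (e.g. via ABA dispute trees) showing why the selected alternative withstands objections from competing alternatives.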