We present a general framework for evaluating image counterfactuals. The power and flexibility of deep generative models make them valuable tools for learning mechanisms in structural causal models. However, their flexibility makes counterfactual identifiability impossible in the general case. Motivated by these issues, we revisit Pearl's axiomatic definition of counterfactuals to determine the necessary constraints on any counterfactual inference model: composition, reversibility, and effectiveness. We frame a counterfactual as a function of an input variable, its parents, and the counterfactual parents, and use the axiomatic constraints to restrict the set of functions that could represent it, thus deriving distance metrics between the approximate and ideal counterfactual functions. We demonstrate how these metrics can be used to compare and choose between different approximate counterfactual inference models and to provide insight into a model's shortcomings and trade-offs.
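The three axiomatic constraints can be turned into distances in a straightforward way: composition says that "intervening" with the factual parents should leave the input unchanged, reversibility says that applying the counterfactual map and then mapping back should recover the input, and effectiveness says that the counterfactual should actually exhibit the intervened-on parents. A minimal sketch of such metrics is below; the function names, the Euclidean norm, and the toy deterministic mechanism used in the usage note are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def composition_distance(cf, x, pa):
    """Composition: counterfactual inference with the factual parents
    should return the original input, i.e. cf(x, pa, pa) == x."""
    return float(np.linalg.norm(cf(x, pa, pa) - x))

def reversibility_distance(cf, x, pa, pa_cf):
    """Reversibility: mapping to the counterfactual world and back
    should recover the original input."""
    x_cf = cf(x, pa, pa_cf)          # forward counterfactual
    x_back = cf(x_cf, pa_cf, pa)     # reverse counterfactual
    return float(np.linalg.norm(x_back - x))

def effectiveness_distance(cf, predict_parents, x, pa, pa_cf):
    """Effectiveness: the counterfactual should carry the intervened
    parent values, as judged by a (hypothetical) parent predictor."""
    x_cf = cf(x, pa, pa_cf)
    return float(np.linalg.norm(predict_parents(x_cf) - pa_cf))
```

For an ideal counterfactual function all three distances are zero; an approximate model's deviation on each axiom quantifies a distinct failure mode, which is what allows the metrics to expose a model's trade-offs rather than collapsing everything into one score.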