Explainability methods for NLP systems encounter a version of the fundamental problem of causal inference: for a given ground-truth input text, we never truly observe the counterfactual texts necessary for isolating the causal effects of model representations on outputs. In response, many explainability methods make no use of counterfactual texts, assuming they will be unavailable. In this paper, we show that robust causal explainability methods can be created using approximate counterfactuals, which can be written by humans to approximate a specific counterfactual or simply sampled using metadata-guided heuristics. The core of our proposal is the Causal Proxy Model (CPM). A CPM explains a black-box model $\mathcal{N}$ because it is trained to have the same actual input/output behavior as $\mathcal{N}$ while creating neural representations that can be intervened upon to simulate the counterfactual input/output behavior of $\mathcal{N}$. Furthermore, we show that the best CPM for $\mathcal{N}$ performs comparably to $\mathcal{N}$ in making factual predictions, which means that the CPM can simply replace $\mathcal{N}$, leading to more explainable deployed models. Our code is available at https://github.com/frankaging/Causal-Proxy-Model.
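To make the training recipe described above concrete, the following is a minimal, illustrative sketch of a two-part CPM-style objective: a factual term that distills $\mathcal{N}$'s behavior on the observed input, and a counterfactual term that intervenes on the proxy's hidden representation (swapping in the representation computed from an approximate counterfactual) and matches $\mathcal{N}$'s output on that counterfactual. This is not the authors' implementation (see the linked repository for that); the names `ToyProxy`, `cpm_step`, and `swap_dims` are hypothetical, and a tiny MLP over pre-encoded vectors stands in for a real text encoder.

```python
# A minimal sketch of the CPM training idea, assuming `blackbox` is the frozen
# model N being explained (returning logits) and inputs are pre-encoded
# feature vectors as a toy stand-in for text. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyProxy(nn.Module):
    """Toy CPM: an encoder whose hidden state can be intervened upon,
    followed by a classification head."""

    def __init__(self, d_in: int = 64, d_hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def forward(self, x, swap_hidden=None, swap_dims=None):
        h = self.encoder(x)
        if swap_hidden is not None:
            # Interchange intervention: overwrite selected hidden dimensions
            # with those computed from the approximate counterfactual input.
            h = h.clone()
            h[:, swap_dims] = swap_hidden[:, swap_dims]
        return self.head(h), h


def cpm_step(proxy, blackbox, x, x_cf, swap_dims, optimizer):
    """One training step: mimic N factually, and mimic N's counterfactual
    behavior under an intervention on the proxy's hidden representation."""
    with torch.no_grad():
        target_factual = F.softmax(blackbox(x), dim=-1)  # N on the observed input
        target_cf = F.softmax(blackbox(x_cf), dim=-1)    # N on the approximate counterfactual

    # Factual term: same actual input/output behavior as N.
    logits_factual, _ = proxy(x)
    loss_factual = F.kl_div(
        F.log_softmax(logits_factual, dim=-1), target_factual, reduction="batchmean"
    )

    # Counterfactual term: run the proxy on the approximate counterfactual,
    # swap part of its hidden state into the factual run, and train the
    # intervened prediction to match N's counterfactual output.
    _, h_cf = proxy(x_cf)
    logits_intervened, _ = proxy(x, swap_hidden=h_cf, swap_dims=swap_dims)
    loss_cf = F.kl_div(
        F.log_softmax(logits_intervened, dim=-1), target_cf, reduction="batchmean"
    )

    loss = loss_factual + loss_cf
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    blackbox = nn.Linear(64, 2)  # stand-in for the frozen black-box model N
    for p in blackbox.parameters():
        p.requires_grad_(False)
    proxy = ToyProxy()
    optimizer = torch.optim.Adam(proxy.parameters(), lr=1e-3)
    x, x_cf = torch.randn(8, 64), torch.randn(8, 64)  # factual / approx. counterfactual pairs
    print(cpm_step(proxy, blackbox, x, x_cf, swap_dims=slice(0, 16), optimizer=optimizer))
```

The key design choice this sketch illustrates is that only the counterfactual term requires approximate counterfactuals at training time; after training, localized interventions on the proxy's hidden state can simulate counterfactual behavior without any counterfactual text.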