To interpret uncertainty estimates from differentiable probabilistic models, recent work has proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain, identifying a single on-manifold change to the input such that the model becomes more certain in its prediction. We broaden this exploration to examine $\delta$-CLUE, the set of potential CLUEs within a $\delta$-ball of the original input in latent space. We study the diversity of such sets and find that many CLUEs are redundant; as such, we propose DIVerse CLUE ($\nabla$-CLUE), a set of CLUEs, each of which proposes a distinct explanation of how one can decrease the uncertainty associated with an input. We then further propose GLobal AMortised CLUE (GLAM-CLUE), a distinct and novel method which learns amortised mappings on specific groups of uncertain inputs, efficiently transforming them, in a single function call, into inputs for which the model will be certain. Our experiments show that $\delta$-CLUE, $\nabla$-CLUE, and GLAM-CLUE all address shortcomings of CLUE and provide beneficial explanations of uncertainty estimates to practitioners.
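The core $\delta$-CLUE search can be illustrated with a minimal sketch: minimise an uncertainty function over latent codes while constraining the search to the $\delta$-ball around the original latent $z_0$ via projection. Everything here is a toy stand-in, not the paper's implementation: the quadratic `uncertainty` plays the role of the model's uncertainty composed with a decoder, and finite differences substitute for autodiff.

```python
import numpy as np

def project_to_delta_ball(z, z0, delta):
    # Project a latent point back onto the delta-ball centred at z0.
    diff = z - z0
    norm = np.linalg.norm(diff)
    return z0 + diff * (delta / norm) if norm > delta else z

def find_clue(z0, uncertainty, delta=1.0, lr=0.1, steps=100):
    # Projected gradient descent on the uncertainty, constrained to
    # ||z - z0|| <= delta. Gradients via central finite differences
    # (a toy stand-in for differentiating through the actual model).
    z = z0.copy()
    eps = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(len(z)):
            e = np.zeros_like(z)
            e[i] = eps
            grad[i] = (uncertainty(z + e) - uncertainty(z - e)) / (2 * eps)
        z = project_to_delta_ball(z - lr * grad, z0, delta)
    return z

# Toy uncertainty surface: minimised at z* = (2, 0), outside the
# unit delta-ball around the origin, so the CLUE lands on the boundary.
uncertainty = lambda z: float(np.sum((z - np.array([2.0, 0.0])) ** 2))
z0 = np.zeros(2)
clue = find_clue(z0, uncertainty, delta=1.0)
```

In this toy setting the constrained optimum is the boundary point $(1, 0)$: the search moves toward the low-uncertainty region and the projection caps the change at distance $\delta$, which is exactly the trade-off between uncertainty reduction and proximity that the $\delta$-ball encodes.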