Ensembles improve prediction performance and allow uncertainty quantification by aggregating predictions from multiple models. In deep ensembling, the individual models are usually black-box neural networks or, more recently, partially interpretable semi-structured deep transformation models. However, the interpretability of the ensemble members is generally lost upon aggregation. This is a crucial drawback of deep ensembles in high-stakes decision-making, where interpretable models are desired. We propose a novel transformation ensemble, which aggregates probabilistic predictions with the guarantee of preserving interpretability and yielding predictions that are uniformly better than those of the ensemble members on average. Transformation ensembles are tailored to interpretable deep transformation models but are applicable to a wider range of probabilistic neural networks. In experiments on several publicly available data sets, we demonstrate that transformation ensembles perform on par with classical deep ensembles in terms of prediction performance, discrimination, and calibration. In addition, we demonstrate how transformation ensembles quantify both aleatoric and epistemic uncertainty and produce minimax optimal predictions under certain conditions.
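To make the aggregation step concrete (a sketch of one natural construction; the base distribution $F_Z$, transformation functions $h_m$, and ensemble size $M$ are notation introduced here and not defined in the abstract): a classical deep ensemble averages the members' predicted distributions on the probability scale, whereas a transformation ensemble can instead average on the scale of the transformation functions, so that the aggregate itself remains a transformation model:
\[
\bar F_{\text{deep}}(y \mid x) \;=\; \frac{1}{M} \sum_{m=1}^{M} F_Z\bigl(h_m(y \mid x)\bigr),
\qquad
\bar F_{\text{trafo}}(y \mid x) \;=\; F_Z\Bigl(\tfrac{1}{M} \sum_{m=1}^{M} h_m(y \mid x)\Bigr).
\]
Averaging inside $F_Z$ keeps the ensemble within the transformation-model class, which is what would allow interpretability to be preserved under such a construction.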