Deep Ensembles, a form of Bayesian Neural Networks, estimate the uncertainty of a prediction by collecting the outputs of multiple independently trained networks and measuring how much those predictions disagree. In this paper, we introduce an uncertainty estimation method that places an independent categorical distribution over each layer of the network, yielding far more possible samples, whose sampled networks share layers, than regular Deep Ensembles. We further introduce an optimized inference procedure that reuses the outputs of layers common to several samples, achieving up to a 19x speed-up and a quadratic reduction in memory usage. We also show that the method can be further improved by ranking samples, resulting in models that require less memory and time to run while achieving higher uncertainty quality than Deep Ensembles.
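The sketch below is a minimal illustration, not the authors' implementation, of the layer-wise sampling idea described above: each layer keeps several independent candidate sub-layers, and an ensemble member is drawn by one categorical choice per layer, so the number of distinct member networks grows combinatorially with depth. The names `LayerEnsemble`, `LayerEnsembleNet`, `n_options`, and `predict_with_uncertainty` are illustrative assumptions; the output-reuse optimization from the paper is only indicated in a comment.

```python
# Minimal sketch (assumed names, not the paper's code) of per-layer
# categorical sampling for uncertainty estimation.
import itertools
import torch
import torch.nn as nn


class LayerEnsemble(nn.Module):
    """One layer with `n_options` independent candidate sub-layers."""

    def __init__(self, make_layer, n_options: int):
        super().__init__()
        self.options = nn.ModuleList(make_layer() for _ in range(n_options))

    def forward(self, x, choice: int):
        # `choice` is the categorical sample selecting this layer's candidate.
        return self.options[choice](x)


class LayerEnsembleNet(nn.Module):
    def __init__(self, sizes, n_options: int):
        super().__init__()
        self.layers = nn.ModuleList(
            LayerEnsemble(lambda i=i: nn.Linear(sizes[i], sizes[i + 1]), n_options)
            for i in range(len(sizes) - 1)
        )

    def forward(self, x, choices):
        # `choices` holds one categorical sample per layer, e.g. (0, 2, 1).
        for i, (layer, c) in enumerate(zip(self.layers, choices)):
            x = layer(x, c)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
        return x

    @torch.no_grad()
    def predict_with_uncertainty(self, x, n_options: int):
        # Enumerate (or subsample) layer combinations. Samples that share a
        # prefix of choices could reuse intermediate activations, which is
        # the kind of common-layer-output reuse the paper's optimized
        # inference procedure exploits; this sketch simply recomputes them.
        combos = itertools.product(range(n_options), repeat=len(self.layers))
        preds = torch.stack([self.forward(x, c) for c in combos])
        # Mean prediction and per-output disagreement across sampled members.
        return preds.mean(dim=0), preds.var(dim=0)


if __name__ == "__main__":
    net = LayerEnsembleNet(sizes=[8, 16, 4], n_options=2)
    mean, var = net.predict_with_uncertainty(torch.randn(5, 8), n_options=2)
    print(mean.shape, var.shape)  # torch.Size([5, 4]) torch.Size([5, 4])
```

With 2 options per layer and 2 layers this already gives 4 member networks from the parameter budget of 2 independent networks, which is the combinatorial advantage over a regular Deep Ensemble of the same size.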