Dynamic neural networks are a recent technique that promises a remedy for the increasing size of modern deep learning models by adapting their computational cost to the difficulty of the input samples, allowing the model to adjust to a limited computational budget. However, the poor quality of uncertainty estimates in deep learning models makes it difficult to distinguish between hard and easy samples. To address this challenge, we present a computationally efficient approach for post-hoc uncertainty quantification in dynamic neural networks. We show that adequately quantifying and accounting for both aleatoric and epistemic uncertainty through a probabilistic treatment of the last layers improves predictive performance and aids decision-making when determining the computational budget. In experiments on CIFAR-100 and ImageNet, we show improvements in accuracy, uncertainty quantification, and calibration error.
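The abstract does not spell out the construction, but one common way to realize a "probabilistic treatment of the last layers" is a post-hoc Laplace approximation over the weights of each exit head, with Monte Carlo sampling at prediction time to decide whether to exit early. The sketch below is a minimal illustration of that idea in PyTorch under these assumptions; it uses a diagonal generalized Gauss-Newton Hessian, and all names (`diag_laplace_posterior`, `predictive_entropy`, the threshold `tau`) are hypothetical, not the authors' API.

```python
import torch

@torch.no_grad()
def diag_laplace_posterior(W, feats, prior_prec=1.0):
    """Diagonal Laplace approximation over last-layer weights.

    W:     (C, D) MAP weights of one exit head.
    feats: (N, D) last-layer features on held-out data.
    Returns per-weight posterior variances of N(W, diag(1/(H + prior_prec))),
    with H the diagonal of the generalized Gauss-Newton matrix for
    cross-entropy loss.
    """
    p = (feats @ W.T).softmax(dim=-1)                  # (N, C) probabilities
    # GGN diagonal: sum_n p_nc (1 - p_nc) * feats_nd^2 for each weight W_cd
    h = torch.einsum('nc,nd->cd', p * (1 - p), feats ** 2)
    return 1.0 / (h + prior_prec)                      # (C, D) variances

@torch.no_grad()
def predictive_entropy(W, var, feats, n_samples=32):
    """Monte Carlo posterior predictive and its entropy per sample."""
    eps = torch.randn(n_samples, *W.shape)             # (S, C, D)
    Ws = W + eps * var.sqrt()                          # sampled weight matrices
    probs = (feats @ Ws.transpose(-1, -2)).softmax(dim=-1).mean(dim=0)  # (N, C)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)           # (N,)
    return ent, probs

# Hypothetical early-exit rule: stop at this head when the posterior
# predictive is confident enough, otherwise continue to deeper layers.
# entropy, probs = predictive_entropy(W, var, feats_batch)
# exit_now = entropy < tau
```

Because the approximation only touches the last layer of each head, it can be fit after training from a single pass over held-out features, which is one plausible reading of the "computationally efficient, post-hoc" claim.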