Uncertainty quantification in neural networks is one of the most discussed topics for safety-critical applications. Although Neural Networks (NNs) have achieved state-of-the-art performance in many applications, they still produce unreliable point predictions that carry no information about uncertainty. Among the various methods that enable neural networks to estimate uncertainty, Monte Carlo (MC) dropout has quickly gained popularity due to its simplicity. In this study, we present a new version of the traditional dropout layer in which the number of dropout configurations can be fixed; each layer can then adopt this new dropout layer, and the MC method is applied to quantify the uncertainty associated with NN predictions. We conduct experiments on both toy and realistic datasets and compare the results with the MC method using the traditional dropout layer. Performance analysis using uncertainty evaluation metrics corroborates that our dropout layer offers better performance in most cases.
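To make the idea concrete, below is a minimal PyTorch sketch of MC dropout with a fixed pool of dropout configurations, as the abstract describes at a high level. The class name `FixedConfigDropout`, the mask-cycling scheme, and all hyperparameters (`p`, `n_configs`, the network shape) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FixedConfigDropout(nn.Module):
    """Hypothetical dropout layer drawing from a fixed pool of masks.

    Instead of sampling a fresh Bernoulli mask on every forward pass
    (standard dropout), we pre-sample `n_configs` masks once and cycle
    through them, so the MC ensemble is built from a fixed number of
    dropout configurations.
    """

    def __init__(self, n_features: int, p: float = 0.5, n_configs: int = 10):
        super().__init__()
        keep = torch.bernoulli(torch.full((n_configs, n_features), 1.0 - p))
        # Scale by 1/(1-p) so activations keep the same expected magnitude.
        self.register_buffer("masks", keep / (1.0 - p))
        self.n_configs = n_configs
        self.idx = 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.masks[self.idx % self.n_configs]
        self.idx += 1
        return x * mask

# MC prediction: one stochastic forward pass per dropout configuration,
# then the sample mean and standard deviation give the point prediction
# and an uncertainty estimate.
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                      FixedConfigDropout(64, p=0.5, n_configs=20),
                      nn.Linear(64, 1))
x = torch.linspace(-3, 3, 100).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(20)])
mean, std = preds.mean(0), preds.std(0)
```

Under these assumptions, the only change from traditional MC dropout is that the set of masks is finite and reused, which is one plausible reading of "fixing the number of dropout configurations".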