The willingness to trust predictions formulated by automatic algorithms is key in a vast number of domains. However, many deep architectures can only formulate predictions without an associated uncertainty. In this paper, we propose a method to convert a standard neural network into a Bayesian neural network and estimate the variability of its predictions by sampling, at each forward pass, different networks similar to the original one. We couple our method with a tunable rejection-based approach that retains only the fraction of the dataset that the model can classify with an uncertainty below a user-set threshold. We test our model on a large cohort of brain images from Alzheimer's Disease patients, where we tackle the discrimination of patients from healthy controls based on morphometric images only. We demonstrate how combining the estimated uncertainty with a rejection-based approach increases classification accuracy from 0.86 to 0.95 while retaining 75% of the test set. In addition, the model can select cases to recommend for manual evaluation on the grounds of excessive uncertainty. We believe that the ability to estimate the uncertainty of a prediction, together with tools that let the user modulate the network's behavior to a degree of confidence they are informed about (and comfortable with), is a crucial step toward user compliance and an easier integration of deep learning tools into the everyday tasks currently performed by human operators.
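As a rough illustration of the sampling-plus-rejection scheme described above, the sketch below uses Monte Carlo dropout, one common way to approximate a Bayesian neural network by keeping dropout active at inference. The function names, the number of samples, the threshold value, and the choice of dropout specifically are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def mc_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes (dropout kept active)
    and return the mean class probabilities and their std. deviation."""
    model.train()  # keeps dropout stochastic; in practice enable only dropout layers
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

def classify_with_rejection(model, x, threshold=0.1, n_samples=50):
    """Predict a class only when the per-sample uncertainty falls below
    a user-set threshold; otherwise flag the case for manual review."""
    mean, std = mc_predict(model, x, n_samples)
    pred = mean.argmax(dim=-1)
    # Uncertainty of the predicted class for each sample in the batch.
    uncertainty = std.gather(-1, pred.unsqueeze(-1)).squeeze(-1)
    accepted = uncertainty < threshold
    return pred, uncertainty, accepted
```

Lowering the (hypothetical) `threshold` keeps fewer but more confidently classified cases, mirroring the accuracy-versus-coverage trade-off reported in the abstract; rejected cases are the ones a workflow would route to a human operator.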