Deep neural networks (DNNs) have successfully learned useful data representations in various tasks; however, assessing the reliability of these representations remains a challenge. Deep Ensembles are widely considered the state-of-the-art method for uncertainty estimation, but they are very expensive to train and test. MC-Dropout is a cheaper alternative, but its predictions lack diversity. To obtain more diverse predictions in less time, we introduce the Randomized ReLU Activation (RRA) framework. Under this framework, we propose two strategies, MC-DropReLU and MC-RReLU, to estimate uncertainty. Instead of randomly dropping neurons of the network as in MC-Dropout, the RRA framework adds randomness to the activation function module, making the outputs diverse. To the best of our knowledge, this is the first attempt to add randomness to the activation function module to generate predictive uncertainty. We analyze and compare the output diversity of MC-Dropout and our method from the variance perspective, and derive the relationship between the hyperparameters and output diversity in the two methods. Moreover, our method is simple to implement and does not require modifying the existing model. We experimentally validate the RRA framework on three widely used datasets: CIFAR10, CIFAR100, and TinyImageNet. The experiments demonstrate that our method achieves competitive performance while being more favorable in terms of training time and memory requirements.
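To make the core idea concrete, below is a minimal PyTorch sketch of the MC-RReLU strategy as described above: the activation module is kept stochastic at test time and multiple forward passes are averaged, analogous to MC-Dropout but with randomness in the activations rather than in neuron dropping. The `SmallNet` architecture, the slope range `[1/8, 1/3]`, and the helper `mc_rrelu_predict` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Sketch of MC-RReLU: nn.RReLU samples the negative-side slope uniformly
# from [lower, upper] while in training mode, so keeping these modules in
# training mode at test time yields a stochastic network whose repeated
# forward passes can be used to estimate predictive uncertainty.

class SmallNet(nn.Module):
    """Illustrative CNN; any architecture with RReLU activations works."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.RReLU(lower=1 / 8, upper=1 / 3),  # randomized activation
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.RReLU(lower=1 / 8, upper=1 / 3),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


@torch.no_grad()
def mc_rrelu_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run n_samples stochastic forward passes; return the mean predictive
    distribution and its per-class variance as an uncertainty estimate."""
    model.eval()
    # Re-enable training mode only for the RReLU modules so that the
    # negative slopes are re-sampled on every forward pass.
    for m in model.modules():
        if isinstance(m, nn.RReLU):
            m.train()
    probs = torch.stack(
        [model(x).softmax(dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, classes)
    return probs.mean(dim=0), probs.var(dim=0)


# Usage: mean, var = mc_rrelu_predict(SmallNet(), torch.randn(4, 3, 32, 32))
```

The same test-time procedure applies to MC-DropReLU, with the dropout-style randomization of the activation swapped in for the uniformly sampled slope.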