The aim of this paper is to propose a suitable method for constructing prediction intervals for the output of neural network models. To this end, we adapt the extremely randomized trees method, originally developed for random forests, to construct ensembles of neural networks. The extra randomness introduced into the ensemble reduces the variance of the predictions and yields gains in out-of-sample accuracy. An extensive Monte Carlo simulation exercise shows the good performance of this novel method for constructing prediction intervals in terms of coverage probability and mean square prediction error, and its superiority to state-of-the-art methods in the literature such as the widely used MC dropout and bootstrap procedures. The out-of-sample accuracy of the novel algorithm is further evaluated on experimental settings already adopted in the literature.
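As a concrete illustration of the ensemble idea described above, the following minimal sketch builds an extra-randomized ensemble of small neural networks and derives a prediction interval from the empirical quantiles of the ensemble predictions. It is written in Python with scikit-learn's MLPRegressor; the ensemble size, subsampling rate, randomised hidden-layer widths and all helper names are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact procedure):
# an extra-randomized ensemble of neural networks whose spread yields
# quantile-based prediction intervals.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=500)

def fit_extra_randomized_ensemble(X, y, n_members=25, subsample=0.8):
    """Fit an ensemble of small neural nets, injecting extra randomness via
    random subsampling, random weight initialisations and randomised widths."""
    members = []
    n = X.shape[0]
    for m in range(n_members):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        width = int(rng.choice([8, 16, 32]))      # randomised architecture
        net = MLPRegressor(hidden_layer_sizes=(width,),
                           activation="relu",
                           random_state=m,        # random initial weights
                           max_iter=2000)
        net.fit(X[idx], y[idx])
        members.append(net)
    return members

def prediction_interval(members, X_new, alpha=0.05):
    """Point prediction and (1 - alpha) interval from ensemble quantiles."""
    preds = np.column_stack([net.predict(X_new) for net in members])
    lower = np.quantile(preds, alpha / 2, axis=1)
    upper = np.quantile(preds, 1 - alpha / 2, axis=1)
    return preds.mean(axis=1), lower, upper

ensemble = fit_extra_randomized_ensemble(X, y)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, lo, hi = prediction_interval(ensemble, X_test)
for x, m, l, u in zip(X_test[:, 0], mean, lo, hi):
    print(f"x={x:+.2f}  prediction={m:+.3f}  95% PI=({l:+.3f}, {u:+.3f})")
```

Note that intervals based solely on the ensemble spread capture model uncertainty; a complete procedure would also need to account for residual noise, which is omitted from this sketch for brevity.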