We discuss three issues with a proposed solution for extracting aleatoric and epistemic model uncertainty from regression-based neural networks (NNs). The proposal in question derives a technique by placing evidential priors over the original Gaussian likelihood function and training the NN to infer the hyperparameters of the evidential distribution. This allows both uncertainties to be extracted simultaneously, without sampling or the use of out-of-distribution data, for univariate regression tasks. We describe our issues in detail, give a possible solution, and generalize the technique to the multivariate case.
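For reference, a sketch of the univariate construction summarized above, assuming the standard Normal-Inverse-Gamma parameterization commonly used in deep evidential regression, where the network is trained to output the hyperparameters $(\gamma, \nu, \alpha, \beta)$:

% Evidential prior over the mean and variance of the Gaussian likelihood,
% with both uncertainties available in closed form (no sampling required).
\begin{align}
p(\mu, \sigma^2 \mid \gamma, \nu, \alpha, \beta)
  &= \mathcal{N}\!\left(\mu \,\middle|\, \gamma, \tfrac{\sigma^2}{\nu}\right)
     \, \Gamma^{-1}(\sigma^2 \mid \alpha, \beta), \\
\underbrace{\mathbb{E}[\mu]}_{\text{prediction}} = \gamma, \qquad
\underbrace{\mathbb{E}[\sigma^2]}_{\text{aleatoric}} &= \frac{\beta}{\alpha - 1}, \qquad
\underbrace{\operatorname{Var}[\mu]}_{\text{epistemic}} = \frac{\beta}{\nu(\alpha - 1)}.
\end{align}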