Aleatoric uncertainty quantification seeks distributional knowledge of random responses, which is important for reliability analysis and robustness improvement in machine learning applications. Previous research on aleatoric uncertainty estimation mainly targets closed-form conditional densities or variances, an approach that requires strong restrictions on the data distribution or dimensionality. To overcome these restrictions, we study conditional generative models for aleatoric uncertainty estimation. We introduce two metrics that measure the discrepancy between two conditional distributions and are well suited to these models. Both metrics can be computed easily and unbiasedly via Monte Carlo simulation of the conditional generative models, facilitating both their evaluation and their use in training. We demonstrate numerically that our metrics correctly measure conditional distributional discrepancies and can be used to train conditional generative models that are competitive with existing benchmarks.
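The abstract does not specify the two metrics, so as a minimal sketch of the kind of unbiased Monte Carlo evaluation described, the snippet below estimates the squared energy distance, one representative conditional distributional discrepancy that admits an unbiased sample-based estimator. Everything here (the choice of metric, the sample shapes, and the placeholder generators) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def pairwise_dists(a, b):
    # Euclidean distances between all rows of a (n, d) and b (m, d).
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def energy_distance_unbiased(x, y):
    """Unbiased Monte Carlo estimate of the squared energy distance
    2 E||X - Y|| - E||X - X'|| - E||Y - Y'|| from samples x and y.

    ASSUMPTION: the energy distance stands in for the paper's
    unspecified metrics; it is merely one discrepancy that can be
    estimated without bias from generator samples alone.
    """
    n, m = len(x), len(y)
    d_xy = pairwise_dists(x, y).mean()
    # The diagonals of the within-sample matrices are zero, so summing
    # everything and dividing by n(n-1) / m(m-1) excludes the i == j
    # terms, which keeps the within-sample averages unbiased.
    d_xx = pairwise_dists(x, x).sum() / (n * (n - 1))
    d_yy = pairwise_dists(y, y).sum() / (m * (m - 1))
    return 2.0 * d_xy - d_xx - d_yy

# Hypothetical usage: draw samples from two conditional generators at a
# fixed condition c, e.g. samples_p = gen_p.sample(c, n=512). Here two
# Gaussians with shifted means stand in for the generators.
rng = np.random.default_rng(0)
samples_p = rng.normal(0.0, 1.0, size=(512, 3))
samples_q = rng.normal(0.5, 1.0, size=(512, 3))
print(energy_distance_unbiased(samples_p, samples_q))
```

Because the estimator depends only on samples, the same computation can score a trained conditional generator against held-out data or serve as a training signal, mirroring the evaluation-and-training use described above.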