Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but, when derived from a finite number of observations, they are also uncertain themselves. In this paper, we conceptualize the distinct levels of uncertainty that arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method that can enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models, which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
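As a concrete illustration of how PMPs arise and why they are uncertain under finite data, the following sketch (our own hypothetical toy example, not the paper's implementation) compares two conjugate Gaussian models via their analytic marginal likelihoods and then repeats the computation over many finite datasets to expose the spread of the resulting PMPs. The model names M1 and M2, the priors, and all constants are assumptions made purely for illustration.

```python
# Minimal sketch (assumed toy setup, not the paper's method): two conjugate
# Gaussian models with known noise sd that differ only in their prior mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma, tau = 20, 1.0, 1.0           # sample size, noise sd, prior sd
prior_means = {"M1": 0.0, "M2": 1.0}   # the two competing models (hypothetical)

def log_marginal_likelihood(y, m0, sigma=sigma, tau=tau):
    """Analytic log marginal likelihood under mu ~ N(m0, tau^2) and
    y_i | mu ~ N(mu, sigma^2): jointly, y ~ N(m0 * 1, sigma^2 I + tau^2 J)."""
    k = len(y)
    cov = sigma**2 * np.eye(k) + tau**2 * np.ones((k, k))
    return stats.multivariate_normal.logpdf(y, mean=np.full(k, m0), cov=cov)

def pmps(y):
    """Posterior model probabilities under a uniform prior over models."""
    logml = np.array([log_marginal_likelihood(y, m0)
                      for m0 in prior_means.values()])
    w = np.exp(logml - logml.max())    # subtract max for numerical stability
    return w / w.sum()

# Repeatedly draw finite datasets from the true process (here M1 is true)
# and watch the PMP of M1 fluctuate: this dataset-to-dataset spread is the
# kind of meta-uncertainty the paper's meta-models aim to quantify.
pmp_m1 = [pmps(rng.normal(0.0, sigma, size=n))[0] for _ in range(500)]
print(f"PMP(M1): mean={np.mean(pmp_m1):.3f}, sd={np.std(pmp_m1):.3f}")
```

In this toy setup, increasing n concentrates the PMP of the true model near 1 and shrinks its spread, consistent with the abstract's point that the uncertainty of PMPs is a finite-data phenomenon.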