Bayesian model comparison (BMC) offers a principled probabilistic approach to study and rank competing models. In standard BMC, we construct a discrete probability distribution over the set of possible models, conditional on the observed data of interest. These posterior model probabilities (PMPs) are measures of uncertainty, but -- when derived from a finite number of observations -- are also uncertain themselves. In this paper, we conceptualize distinct levels of uncertainty which arise in BMC. We explore a fully probabilistic framework for quantifying meta-uncertainty, resulting in an applied method to enhance any BMC workflow. Drawing on both Bayesian and frequentist techniques, we represent the uncertainty over the uncertain PMPs via meta-models which combine simulated and observed data into a predictive distribution for PMPs on new data. We demonstrate the utility of the proposed method in the context of conjugate Bayesian regression, likelihood-based inference with Markov chain Monte Carlo, and simulation-based inference with neural networks.
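To make the core quantities concrete, the following is a minimal, self-contained sketch (not the paper's implementation) of standard BMC and of the dataset-to-dataset variability in PMPs that motivates meta-uncertainty. It uses a conjugate Beta-Binomial example; the two candidate priors, the observed data, and all numbers below are hypothetical illustrations, not taken from the paper.

```python
# Hedged sketch: PMPs for two conjugate Beta-Binomial models, plus the
# variability of PMPs across simulated datasets (meta-uncertainty intuition).
import numpy as np
from scipy.special import betaln, comb

rng = np.random.default_rng(1)

# Candidate models: Binomial likelihood with different Beta priors
# (illustrative choices, not from the paper).
PRIORS = [(1.0, 1.0), (5.0, 1.0)]  # M1: uniform; M2: favors large bias

def log_evidence(k, n, a, b):
    """Log marginal likelihood p(y | M) of k successes in n trials
    under a conjugate Beta(a, b) prior on the success probability."""
    return np.log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)

def pmps(k, n):
    """Posterior model probabilities under uniform model priors:
    a softmax over the log evidences of the candidate models."""
    le = np.array([log_evidence(k, n, a, b) for a, b in PRIORS])
    return np.exp(le - np.logaddexp.reduce(le))

# Point-estimate PMPs on one observed dataset: 7 successes in 10 trials.
print("observed PMPs:", pmps(7, 10))

# With only n = 10 observations, the PMPs are themselves uncertain:
# simulating datasets from each model's prior predictive and recomputing
# the PMPs yields a distribution over PMPs rather than a point estimate.
n = 10
for j, (a, b) in enumerate(PRIORS):
    theta = rng.beta(a, b, size=1000)   # prior draws
    k_sim = rng.binomial(n, theta)      # prior predictive datasets
    pmp_m1 = np.array([pmps(k, n)[0] for k in k_sim])
    print(f"data from M{j + 1}: PMP of M1 = "
          f"{pmp_m1.mean():.2f} +/- {pmp_m1.std():.2f}")
```

The spread of simulated PMPs around the observed point estimate is the kind of second-level uncertainty the paper's meta-models target; the full method combines such simulated PMPs with the observed ones into a predictive distribution for PMPs on new data.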