The prior distribution on the parameters of a sampling distribution is the usual starting point for Bayesian uncertainty quantification. In this paper, we present a different perspective which focuses on missing observations as the source of statistical uncertainty, with the parameter of interest being known precisely given the entire population. We argue that the foundation of Bayesian inference is to assign a distribution on missing observations conditional on what has been observed. In the conditionally i.i.d. setting with an observed sample of size $n$, the Bayesian would thus assign a predictive distribution on the missing $Y_{n+1:\infty}$ conditional on $Y_{1:n}$, which then induces a distribution on the parameter. In an application of martingales, Doob showed that choosing the Bayesian predictive distribution returns the conventional posterior as the distribution of the parameter. Taking this as our cue, we relax the predictive machine, avoiding the need for the predictive to be derived solely from the usual prior-to-posterior-to-predictive density formula. We introduce the \textit{martingale posterior distribution}, which returns Bayesian uncertainty directly on any statistic of interest without the need for the likelihood and prior, and this distribution can be sampled through a computational scheme we name \textit{predictive resampling}. To this end, we introduce new predictive methodologies for multivariate density estimation, regression and classification that build upon recent work on bivariate copulas.
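As a minimal sketch of this construction, in our own notation and with the parameter written as a functional $\theta(F)$ of the population distribution $F$: predictive resampling simulates the missing observations forward from the one-step-ahead predictives,
\[
Y_{n+1} \sim p(\cdot \mid y_{1:n}), \qquad Y_{n+i+1} \sim p(\cdot \mid y_{1:n}, Y_{n+1:n+i}) \quad \text{for } i \geq 1,
\]
and the martingale posterior is the induced conditional law of
\[
\theta_\infty = \theta(F_\infty), \qquad F_\infty = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} \delta_{Y_i},
\]
given $Y_{1:n} = y_{1:n}$. In practice the forward simulation is truncated at some large $N$ and repeated, with each pass yielding one draw $\theta(F_N)$ from an approximate martingale posterior.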