We develop scalable methods for producing conformal Bayesian predictive intervals with finite-sample calibration guarantees. Bayesian posterior predictive distributions, $p(y \mid x)$, characterize subjective beliefs on outcomes of interest, $y$, conditional on predictors, $x$. Bayesian prediction is well-calibrated when the model is true, but the predictive intervals may exhibit poor empirical coverage when the model is misspecified, under the so-called $\mathcal{M}$-open perspective. In contrast, conformal inference provides finite-sample frequentist guarantees on predictive confidence intervals without requiring model fidelity. Using 'add-one-in' importance sampling, we show that conformal Bayesian predictive intervals can be obtained efficiently from re-weighted posterior samples of model parameters. Our approach contrasts with existing conformal methods, which require expensive refitting of models or data-splitting to achieve computational efficiency. We demonstrate the utility of the approach on a range of examples, including extensions to partially exchangeable settings such as hierarchical models.
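The 'add-one-in' idea described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes a toy normal-mean model with known unit variance (so the posterior is conjugate and easy to sample), and the function name `conformal_bayes_interval` is hypothetical. For each candidate value $y_{\text{new}}$, posterior samples are re-weighted by the likelihood of the candidate (one importance-sampling step, rather than refitting the model), every point is scored by its re-weighted posterior predictive density, and the candidate is kept when its conformal p-value exceeds $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy setup (illustrative only, not from the paper) ---
# Model: y_i ~ N(mu, 1), prior mu ~ N(0, 10^2); posterior is conjugate normal.
n = 30
y = rng.normal(0.0, 1.0, size=n)
prior_var = 100.0
post_var = 1.0 / (n + 1.0 / prior_var)
post_mean = post_var * y.sum()
mu_samples = rng.normal(post_mean, np.sqrt(post_var), size=2000)

def loglik(y_val, mu):
    """log p(y_val | mu) for the N(mu, 1) likelihood, vectorized over samples."""
    return -0.5 * np.log(2.0 * np.pi) - 0.5 * (y_val - mu) ** 2

def conformal_bayes_interval(y, mu_samples, y_grid, alpha=0.2):
    """Conformal Bayes via 'add-one-in' importance sampling (sketch).

    Re-weights posterior samples for each candidate y_new instead of
    refitting the model on the augmented dataset.
    """
    region = []
    for y_new in y_grid:
        # Add-one-in importance weights: w_s proportional to p(y_new | mu_s).
        logw = loglik(y_new, mu_samples)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Conformity score: predictive density under the augmented posterior,
        # evaluated at each observed point and at the candidate itself.
        pts = np.append(y, y_new)
        dens = np.array([np.sum(w * np.exp(loglik(p, mu_samples))) for p in pts])
        # Rank-based conformal p-value: fraction of points no more typical
        # than the candidate (the candidate counts itself, so pval >= 1/(n+1)).
        pval = np.mean(dens <= dens[-1])
        if pval > alpha:
            region.append(y_new)
    return min(region), max(region)

lo, hi = conformal_bayes_interval(y, mu_samples, np.linspace(-4.0, 4.0, 161))
```

With $\alpha = 0.2$ the returned interval approximates an 80%-coverage predictive region; the grid search over candidates is the naive version of the computation that the paper's re-weighting trick makes cheap, since only the weights change per candidate.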