Statisticians often face the choice between using probability models and a paradigm defined by minimising a loss function. Both approaches are useful and, if the loss can be re-cast into a proper probability model, there are many tools to decide which model or loss is more appropriate for the observed data, in the sense of best explaining the data's nature. However, when the loss leads to an improper model, there are no principled ways to guide this choice. We address this task by combining the Hyv\"arinen score, which naturally targets infinitesimal relative probabilities, with general Bayesian updating, which provides a unifying framework for inference on losses and models. Specifically, we propose the H-score, a general Bayesian selection criterion, and prove that it consistently selects the (possibly improper) model closest to the data-generating truth in Fisher's divergence. We also prove that an associated H-posterior consistently learns optimal hyper-parameters that feature in loss functions, including a challenging tempering parameter in generalised Bayesian inference. As salient examples, we consider robust regression and non-parametric density estimation, where popular loss functions define improper models for the data and hence cannot be dealt with using standard model selection tools. These examples illustrate advantages in robustness-efficiency trade-offs and provide a Bayesian implementation for kernel density estimation, opening a new avenue for Bayesian non-parametrics.
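For reference, the standard definitions of the two quantities named above are as follows; this is a sketch using the usual conventions from the score-matching literature, and the paper's own notation may differ. The Hyv\"arinen score of a (possibly unnormalised) density $p$ at an observation $y \in \mathbb{R}^d$ is
\[
\mathcal{H}(y, p) \;=\; 2\,\Delta_y \log p(y) \;+\; \left\lVert \nabla_y \log p(y) \right\rVert^2,
\]
where $\nabla_y$ and $\Delta_y$ denote the gradient and Laplacian in $y$. Because $\mathcal{H}$ depends on $p$ only through $\nabla_y \log p$, it is invariant to the normalising constant, which is what makes it applicable to improper models. Its expectation under the data-generating density $g$ is minimised (up to an additive constant not depending on $p$) by the $p$ closest to $g$ in Fisher's divergence,
\[
D_F(g \,\|\, p) \;=\; \int g(y)\, \left\lVert \nabla_y \log g(y) - \nabla_y \log p(y) \right\rVert^2 \, dy .
\]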