We study generalized Bayesian inference under misspecification, i.e. when the model is 'wrong but useful'. Generalized Bayes equips the likelihood with a learning rate $\eta$. We show that for generalized linear models (GLMs), $\eta$-generalized Bayes concentrates around the best approximation of the truth within the model for specific $\eta \neq 1$, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.
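For concreteness, a minimal sketch of the $\eta$-generalized posterior referred to above, in notation assumed here rather than taken from the abstract: given observations $z_1, \dots, z_n$, a model $\{p_\theta\}$, and a prior $\pi$, the generalized posterior tempers the likelihood by the learning rate $\eta$,
$$\pi_\eta(\theta \mid z_1, \dots, z_n) \;\propto\; \pi(\theta) \prod_{i=1}^{n} p_\theta(z_i)^{\eta},$$
so that $\eta = 1$ recovers standard Bayes, while $\eta < 1$ downweights a possibly misspecified likelihood.

The sketch below targets this generalized posterior for logistic regression with a Gaussian prior via random-walk Metropolis. It is a generic illustration of the idea, not the samplers derived in the paper; the prior variance `tau2`, step size, and function names are hypothetical choices.

```python
import numpy as np

def eta_logistic_log_post(beta, X, y, eta, tau2=10.0):
    """Log eta-generalized posterior (up to a constant) for logistic regression:
    eta * Bernoulli log-likelihood + N(0, tau2 I) log-prior (illustrative prior)."""
    z = X @ beta
    # Numerically stable log-likelihood for labels y in {0, 1}:
    # sum_i [ y_i * z_i - log(1 + exp(z_i)) ]
    loglik = np.sum(y * z - np.logaddexp(0.0, z))
    logprior = -0.5 * np.sum(beta**2) / tau2
    return eta * loglik + logprior

def rw_metropolis(X, y, eta, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis chain targeting the eta-generalized posterior."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    beta = np.zeros(d)
    lp = eta_logistic_log_post(beta, X, y, eta)
    samples = np.empty((n_iter, d))
    for t in range(n_iter):
        prop = beta + step * rng.standard_normal(d)
        lp_prop = eta_logistic_log_post(prop, X, y, eta)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        samples[t] = beta
    return samples
```

Setting `eta=1.0` reproduces a standard Bayesian logistic regression chain, so the effect of the learning rate can be examined by comparing runs that differ only in `eta`.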