In this paper, we study the learning rate of generalized Bayes estimators in a general setting where the hypothesis class can be uncountable and have an irregular shape, the loss function can have heavy tails, and the optimal hypothesis may not be unique. We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator can achieve a fast learning rate. We apply our results to show that standard Bayesian linear regression is robust to heavy-tailed distributions.