PAC-Bayes, introduced by McAllester (1998), is a useful framework for deriving generalization bounds. It offers the flexibility to derive distribution- and algorithm-dependent bounds, which are often tighter than VC-based uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework: we exhibit an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in one dimension; it is well known that this task is learnable using just $O(\log(1/\delta)/\epsilon)$ examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns one-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
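For reference, one standard (McAllester-style) form of the PAC-Bayes bound discussed above is sketched below; exact constants and logarithmic factors vary across versions in the literature, and this statement is not reproduced from the manuscript itself. Here $P$ is a prior over hypotheses fixed before seeing the sample, $Q$ is any posterior, $L_{\mathcal{D}}$ and $L_S$ denote the population and empirical 0-1 losses, and $m$ is the sample size. With probability at least $1-\delta$ over the sample $S$, simultaneously for all posteriors $Q$,
\[
  \mathbb{E}_{h \sim Q}\, L_{\mathcal{D}}(h)
  \;\le\;
  \mathbb{E}_{h \sim Q}\, L_{S}(h)
  \;+\;
  \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{2(m-1)}}.
\]
The limitation stated above is that, for 1-dimensional linear classifiers, the $\mathrm{KL}(Q \,\|\, P)$ term cannot be kept small for every realizable distribution, so the right-hand side can be made arbitrarily large even though the task itself is easily learnable.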