PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, when applied to some families of deterministic models such as neural networks, they require a loose and costly derandomization step. As an alternative to this step, we introduce new PAC-Bayesian generalization bounds whose originality is to be disintegrated, i.e., they provide guarantees for one single hypothesis instead of the usual averaged analysis. Our bounds are easy to optimize and can be used to design learning algorithms. We illustrate the practical interest of our results on neural networks and show a significant improvement over the state-of-the-art framework.
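To make the notion of a disintegrated bound concrete, the display below sketches the generic pointwise form popularized in the disintegrated PAC-Bayes literature (e.g., Rivasplata et al., 2020); it is given only as an illustration of the style of guarantee, not as the exact bound derived in this work. Here S denotes the learning sample of size m drawn from a distribution D, \pi a (data-free) prior over hypotheses, \rho_S a data-dependent posterior, and f any measurable comparator function (for instance a divergence between the empirical and true risks).

% Illustrative disintegrated (pointwise) PAC-Bayesian bound; assumed generic form, not this paper's theorem.
% With probability at least 1-\delta over the joint draw of S ~ D^m and h ~ \rho_S:
\[
  f(h, S) \;\le\; \ln\frac{\rho_S(h)}{\pi(h)}
  \;+\; \ln\!\Big(\tfrac{1}{\delta}\,
      \mathbb{E}_{S' \sim \mathcal{D}^m}\,\mathbb{E}_{h' \sim \pi}\,
      e^{f(h', S')}\Big).
\]
% The guarantee holds for the single sampled hypothesis h, with no expectation over \rho_S on the
% left-hand side, which is what removes the derandomization step required by classical PAC-Bayes bounds.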