PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, when applied to some families of deterministic models such as neural networks, they require a loose and costly derandomization step. As an alternative to this step, we introduce three new PAC-Bayesian generalization bounds whose originality is that they are pointwise, meaning that they provide guarantees for a single hypothesis instead of the usual averaged analysis. Our bounds are rather general, potentially parameterizable, and provide novel insights for various machine learning settings that rely on randomized algorithms. We illustrate the interest of our theoretical results for the analysis of neural network training.
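For context, here is a minimal sketch of the contrast drawn above, not of the bounds introduced in this work: a classical averaged PAC-Bayes bound in the McAllester/Maurer style, followed by the generic shape that pointwise (disintegrated) guarantees usually take, where the probability statement and the divergence term concern a single sampled hypothesis; the constants in the second display are illustrative only.

% Classical averaged PAC-Bayes bound: with probability at least 1 - \delta over
% an i.i.d. m-sample S, simultaneously for all posteriors Q and a fixed prior P,
\[
  \mathop{\mathbb{E}}_{h \sim Q}\!\big[R(h)\big]
  \;\le\;
  \mathop{\mathbb{E}}_{h \sim Q}\!\big[\widehat{R}_S(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}} .
\]
% Generic shape of a pointwise (disintegrated) guarantee, shown for illustration:
% with probability at least 1 - \delta over the joint draw of S and of a single
% hypothesis h ~ Q, the KL divergence is replaced by a log-density ratio
% evaluated at the sampled h, so no derandomization over Q is needed afterwards,
\[
  R(h)
  \;\le\;
  \widehat{R}_S(h)
  + \sqrt{\frac{\ln\frac{\mathrm{d}Q}{\mathrm{d}P}(h) + \ln\frac{2\sqrt{m}}{\delta}}{2m}} .
\]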