We introduce the matrix-based Rényi's $\alpha$-order entropy functional to parameterize the information bottleneck (IB) principle of Tishby et al. with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distributional assumptions. We show that deep neural networks trained with DIB outperform their variational counterparts and networks trained with other forms of regularization, in terms of both generalization performance and robustness to adversarial attacks. Code is available at https://github.com/yuxi120407/DIB.
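As background, the sketch below shows one common way to compute the matrix-based Rényi's $\alpha$-order entropy of a batch of samples: build a Gram matrix with a kernel, normalize it to unit trace, and take the $\alpha$-entropy of its eigenvalue spectrum. This is a minimal illustration, not the paper's implementation; the Gaussian kernel, the width `sigma`, and the function name are assumptions for the example.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=1.01, sigma=1.0):
    """Matrix-based Renyi's alpha-order entropy of the samples in X (n x d).

    Sketch only: Gaussian Gram matrix, trace normalization, and the
    alpha-entropy of the resulting eigenvalue spectrum. `alpha` and
    `sigma` are illustrative hyperparameter choices.
    """
    # Pairwise squared Euclidean distances between the n samples.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian (RBF) Gram matrix, normalized to unit trace.
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    A = K / np.trace(K)
    # Eigenvalues of A sum to 1 and play the role of a probability spectrum.
    eigvals = np.linalg.eigvalsh(A)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative values
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eigvals ** alpha))

# Toy usage: entropy of 100 random 10-dimensional samples.
X = np.random.randn(100, 10)
print(matrix_renyi_entropy(X, alpha=1.01))
```

Because this estimator works directly on the Gram matrix of a mini-batch, it requires no density estimation, which is what allows DIB to sidestep the variational approximations and distributional assumptions mentioned above.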