We propose a novel information bottleneck (IB) method named Drop-Bottleneck, which discretely drops features that are irrelevant to the target variable. Drop-Bottleneck not only enjoys a simple and tractable compression objective but also provides a deterministic compressed representation of the input variable, which is useful for inference tasks that require a consistent representation. Moreover, it can jointly train a feature extractor and select features according to each feature dimension's relevance to the target task, which is unattainable by most neural network-based IB methods. Building on Drop-Bottleneck, we propose an exploration method for reinforcement learning tasks. On a multitude of noisy and reward-sparse maze-navigation tasks in VizDoom (Kempka et al., 2016) and DMLab (Beattie et al., 2016), our exploration method achieves state-of-the-art performance. As a new IB framework, we demonstrate that Drop-Bottleneck outperforms Variational Information Bottleneck (VIB) (Alemi et al., 2017) in multiple aspects, including adversarial robustness and dimensionality reduction.
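To make the feature-dropping mechanism concrete, below is a minimal PyTorch sketch of the idea described above: each feature dimension has a learnable drop probability, training samples a stochastic Bernoulli mask, and inference applies a deterministic mask. The `DropBottleneck` class name, the sigmoid parameterization of drop probabilities, the 0.5 threshold for the deterministic mask, the inverted-dropout rescaling, and the straight-through remark are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class DropBottleneck(nn.Module):
    """Sketch of a discrete feature-drop bottleneck.

    Each feature dimension i is kept with probability 1 - p_i, where
    p_i is a learnable drop probability. Training uses stochastic
    Bernoulli drops; inference uses a deterministic mask, yielding a
    consistent compressed representation of the input.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Unconstrained logits; sigmoid maps them to drop probabilities.
        self.drop_logits = nn.Parameter(torch.zeros(dim))

    def drop_probs(self) -> torch.Tensor:
        return torch.sigmoid(self.drop_logits)

    def compression_bound(self, marginal_entropy: torch.Tensor) -> torch.Tensor:
        # Tractable compression term: with independently dropped
        # dimensions, I(X; Z) can be bounded by sum_i (1 - p_i) * H(X_i).
        # marginal_entropy holds per-dimension entropy estimates H(X_i)
        # (how these are estimated is an assumption of this sketch).
        return ((1.0 - self.drop_probs()) * marginal_entropy).sum()

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        p = self.drop_probs()
        if self.training:
            # Stochastic per-element drop; a relaxation such as a
            # straight-through estimator would be needed to pass
            # gradients through the Bernoulli sample.
            keep = torch.bernoulli((1.0 - p).expand_as(features))
        else:
            # Deterministic mask: keep dimensions whose drop
            # probability is below 0.5, so repeated inference on the
            # same input yields the same compressed representation.
            keep = (p < 0.5).float().expand_as(features)
        # Rescale kept dimensions so the expected magnitude matches
        # the training-time stochastic masking (inverted-dropout style).
        return features * keep / (1.0 - p).clamp(min=1e-6)
```

Because the inference-time mask depends only on the learned drop probabilities and not on random sampling, the compressed representation is deterministic, which is the property the abstract highlights for downstream inference tasks.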