The concept of conditional computation for deep networks has been proposed previously to improve model performance by selectively using only parts of the model, conditioned on the sample being processed. In this paper, we investigate input-dependent dynamic filter selection in deep convolutional neural networks (CNNs). The problem is interesting because forcing different parts of the model to learn from different types of samples may help us acquire better filters, improve the model's generalization performance, and potentially increase the interpretability of model behavior. We propose a novel yet simple framework called GaterNet, which consists of a backbone network and a gater network. The backbone network is a regular CNN that performs the major computation needed for making a prediction, while the gater network examines each input globally and generates binary gates that selectively activate filters in the backbone. Extensive experiments on the CIFAR and ImageNet datasets show that our models consistently outperform the original models by a large margin. On CIFAR-10, our model also improves upon state-of-the-art results.
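To make the backbone-plus-gater structure concrete, here is a minimal sketch in PyTorch of per-input channel gating. The toy two-layer backbone, the gater's layer sizes, and the straight-through binarization used here are illustrative assumptions for exposition, not the paper's exact architecture or training procedure.

```python
# Minimal sketch of input-dependent filter gating (GaterNet-style),
# under assumed toy architectures; not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gater(nn.Module):
    """Small network mapping an input image to binary per-filter gates."""
    def __init__(self, num_gates: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_gates)

    def forward(self, x):
        logits = self.fc(self.encoder(x).flatten(1))
        soft = torch.sigmoid(logits)        # differentiable relaxation
        hard = (soft > 0.5).float()         # binary gates in {0, 1}
        # Straight-through estimator: use hard gates in the forward pass,
        # but route gradients through the soft relaxation.
        return hard + soft - soft.detach()

class GatedBackbone(nn.Module):
    """Regular CNN whose filters are switched on/off per input sample."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding=1)
        self.conv2 = nn.Conv2d(64, 64, 3, padding=1)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x, gates):
        g1, g2 = gates[:, :64], gates[:, 64:]              # gates per layer
        h = F.relu(self.conv1(x)) * g1[:, :, None, None]   # channel-wise gating
        h = F.relu(self.conv2(h)) * g2[:, :, None, None]
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)
        return self.head(h)

gater = Gater(num_gates=128)
backbone = GatedBackbone()
images = torch.randn(8, 3, 32, 32)          # e.g. a CIFAR-10 batch
logits = backbone(images, gater(images))
print(logits.shape)                         # torch.Size([8, 10])
```

The straight-through trick above is one common way to keep the whole pipeline trainable end to end despite the discrete gates: the forward pass sees strictly binary gates, while the backward pass treats the gate as its sigmoid relaxation.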