The search for efficient, sparse deep neural network models is most prominently performed by pruning: training a dense, overparameterized network and removing parameters, usually by following a manually-crafted heuristic. Additionally, the recent Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks which, when trained from scratch on a comparable budget, match the performance of the original dense counterpart. We revisit fundamental aspects of pruning algorithms, pointing out missing ingredients in previous approaches, and develop a method, Continuous Sparsification, which searches for sparse networks based on a novel approximation of an intractable $\ell_0$ regularization. We compare against dominant heuristic-based methods on pruning as well as ticket search -- finding sparse subnetworks that can be successfully re-trained from an early iterate. Empirical results show that we surpass the state of the art on both objectives, across models and datasets, including VGG trained on CIFAR-10 and ResNet-50 trained on ImageNet. In addition to setting a new standard for pruning, Continuous Sparsification also offers fast parallel ticket search, opening doors to new applications of the Lottery Ticket Hypothesis.
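To make the idea of a differentiable surrogate for $\ell_0$ regularization concrete, here is a minimal sketch, not the paper's exact formulation: each weight is gated by a sigmoid of a learnable logit scaled by a temperature, and the sum of those sigmoids acts as a smooth proxy for the number of non-zero weights, hardening toward a binary mask as the temperature grows. The layer name `SoftMaskedLinear`, the initialization `init_s`, the temperature schedule, and the penalty weight `lam` are illustrative assumptions.

```python
# Sketch of a sigmoid-based soft mask as a smooth l0 surrogate (illustrative,
# not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedLinear(nn.Module):
    """Linear layer whose weights are element-wise gated by soft masks."""
    def __init__(self, in_features, out_features, init_s=0.05):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Mask logits s: sigmoid(beta * s) approaches {0, 1} as beta grows.
        self.s = nn.Parameter(torch.full((out_features, in_features), init_s))

    def forward(self, x, beta):
        mask = torch.sigmoid(beta * self.s)   # soft, differentiable mask
        return F.linear(x, self.weight * mask, self.bias)

    def l0_surrogate(self, beta):
        # Smooth proxy for the number of active (non-zero) weights.
        return torch.sigmoid(beta * self.s).sum()

# Usage sketch: anneal beta upward so the soft mask hardens over training.
layer = SoftMaskedLinear(784, 10)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
lam = 1e-4   # illustrative penalty weight
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
for epoch in range(5):
    beta = 1.0 * (1.5 ** epoch)   # illustrative temperature schedule
    loss = F.cross_entropy(layer(x, beta), y) + lam * layer.l0_surrogate(beta)
    opt.zero_grad()
    loss.backward()
    opt.step()
```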