Modern deep learning involves training costly, highly overparameterized networks, thus motivating the search for sparser networks that can still be trained to the same accuracy as the full network (i.e., matching). Iterative magnitude pruning (IMP) is a state-of-the-art algorithm that can find such highly sparse matching subnetworks, known as winning tickets. IMP operates by iterative cycles of training, masking the smallest-magnitude weights, rewinding to an early training point, and repeating. Despite its simplicity, the underlying principles for when and how IMP finds winning tickets remain elusive. In particular, what useful information does an IMP mask found at the end of training convey to a rewound network near the beginning of training? How does SGD allow the network to extract this information? And why is iterative pruning needed? We develop answers in terms of the geometry of the error landscape. First, we find that, at higher sparsities, pairs of pruned networks at successive pruning iterations are connected by a linear path with zero error barrier if and only if they are matching. This indicates that masks found at the end of training convey the identity of an axial subspace that intersects a desired linearly connected mode of a matching sublevel set. Second, we show SGD can exploit this information due to a strong form of robustness: it can return to this mode despite strong perturbations early in training. Third, we show how the flatness of the error landscape at the end of training determines a limit on the fraction of weights that can be pruned at each iteration of IMP. Finally, we show that the role of retraining in IMP is to find a network with new small weights to prune. Overall, these results make progress toward demystifying the existence of winning tickets by revealing the fundamental role of error landscape geometry.
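Read procedurally, the IMP loop described above (train, prune the smallest-magnitude surviving weights, rewind the survivors to an early checkpoint, repeat) fits in a few lines of code. The following is a minimal sketch in PyTorch, not the paper's exact protocol: the `train_fn` callback, the per-round pruning fraction, and the rewind step are illustrative assumptions.

```python
# Minimal sketch of iterative magnitude pruning (IMP) with weight
# rewinding. Assumptions (not from the paper): train_fn(model, masks,
# steps) runs SGD for `steps` updates (or to completion if steps is
# None), re-applying the masks after each update so pruned weights
# stay at zero; fraction and rewind_step are illustrative defaults.
import copy
import torch

def make_masks(model):
    # Start fully dense: a boolean mask of ones per parameter tensor.
    return {n: torch.ones_like(p, dtype=torch.bool)
            for n, p in model.named_parameters()}

def apply_masks(model, masks):
    # Zero out pruned weights in place.
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.mul_(masks[n])

def prune_smallest(model, masks, fraction=0.2):
    # Globally prune the smallest-magnitude weights still alive.
    alive = torch.cat([p.detach().abs()[masks[n]]
                       for n, p in model.named_parameters()])
    threshold = torch.quantile(alive, fraction)
    for n, p in model.named_parameters():
        masks[n] &= p.detach().abs() > threshold
    return masks

def imp(model, train_fn, rounds=10, rewind_step=500, fraction=0.2):
    masks = make_masks(model)
    # Train briefly, then save the early checkpoint to rewind to.
    train_fn(model, masks, rewind_step)
    rewind_state = copy.deepcopy(model.state_dict())
    for _ in range(rounds):
        train_fn(model, masks, None)                    # 1) train to the end
        masks = prune_smallest(model, masks, fraction)  # 2) prune smallest
        model.load_state_dict(rewind_state)             # 3) rewind survivors
        apply_masks(model, masks)                       #    to the checkpoint
    return model, masks
```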
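The first result is stated in terms of the error barrier on the linear path between two pruned networks. As an illustration of how that quantity is commonly measured (an assumed evaluation protocol, not code from the paper), the sketch below interpolates between two weight settings and reports the peak error above the endpoint average; a (near-)zero barrier indicates the two networks lie in one linearly connected mode.

```python
# Sketch of measuring the linear-path error barrier between two
# networks. `evaluate_error` is an assumed callback returning, e.g.,
# test error; the 11-point interpolation grid is also an assumption.
import torch

def interpolate_state(state_a, state_b, alpha):
    # Weights on the linear path (1 - alpha) * theta_a + alpha * theta_b;
    # non-float buffers (e.g. batchnorm step counters) are left as-is.
    return {k: ((1 - alpha) * state_a[k] + alpha * state_b[k])
               if state_a[k].is_floating_point() else state_a[k]
            for k in state_a}

def error_barrier(model, state_a, state_b, evaluate_error, n_points=11):
    # One common definition: the highest error along the linear path,
    # minus the mean of the two endpoint errors.
    errors = []
    for i in range(n_points):
        alpha = i / (n_points - 1)
        model.load_state_dict(interpolate_state(state_a, state_b, alpha))
        errors.append(evaluate_error(model))
    return max(errors) - 0.5 * (errors[0] + errors[-1])
```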