Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets. Generally, models that are more accurate on the "upstream" dataset tend to provide better transfer accuracy "downstream". In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset that have been pruned, that is, compressed by sparsifying their connections. We consider transfer using unstructured-pruned models obtained by applying several state-of-the-art pruning methods, including magnitude-based, second-order, re-growth, lottery-ticket, and regularization approaches, in the context of twelve standard transfer tasks. In a nutshell, our study shows that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities, and, while doing so, can lead to significant inference and even training speedups. At the same time, we observe and analyze significant differences in the behaviour of different pruning methods.
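To make the setting concrete, below is a minimal sketch (not the paper's exact pipeline) of one of the listed approaches, magnitude pruning, applied to an ImageNet-pretrained CNN before fine-tuning on a downstream task. The sparsity level, the choice of ResNet-50, the downstream class count, and the `downstream_loader` are illustrative assumptions.

```python
# Minimal sketch: global magnitude pruning of an ImageNet-pretrained ResNet-50,
# followed by fine-tuning on a downstream task. Values below are assumptions
# for illustration, not the paper's experimental configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

sparsity = 0.9                 # fraction of conv weights removed (assumed)
num_downstream_classes = 10    # placeholder for the transfer dataset

model = models.resnet50(weights="IMAGENET1K_V1")

# Collect all convolutional weight tensors and prune them jointly by magnitude.
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=sparsity)

# Replace the classifier head for the downstream task.
model.fc = nn.Linear(model.fc.in_features, num_downstream_classes)

# Fine-tune; the registered pruning masks keep pruned entries at zero in the
# effective weights (and zero their gradients), so sparsity is preserved
# while the model transfers to the downstream data.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# for images, labels in downstream_loader:   # downstream_loader is assumed
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```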