Due to data privacy issues, accelerating networks with tiny training sets has become a critical need in practice. Previous methods mainly adopt filter-level pruning to accelerate networks with scarce training samples. In this paper, we reveal that dropping blocks is a fundamentally superior approach in this scenario: it enjoys a higher acceleration ratio and results in a better latency-accuracy trade-off under the few-shot setting. To decide which blocks to drop, we propose a new concept, recoverability, which measures how difficult it is to recover the compressed network; it is both efficient to compute and effective for selecting blocks. Finally, we propose PRACTISE, an algorithm that accelerates networks using only tiny sets of training images. PRACTISE outperforms previous methods by a significant margin: for a 22% latency reduction, it surpasses them by 7% on average on ImageNet-1k. It also generalizes well, working under data-free and out-of-domain data settings, too. Our code is at https://github.com/DoctorKey/Practise.
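To make the idea of recoverability concrete, below is a minimal sketch, not the authors' implementation (which also involves a short recovery step and accounts for the latency gain of each block). It simply ranks the droppable residual blocks of a torchvision ResNet-50 by how much bypassing each one perturbs the network's output on a tiny batch of images; the random images and the scoring rule are illustrative assumptions.

```python
import torch
import torchvision

# Illustrative sketch: rank residual blocks by how much dropping each one
# perturbs the final output on a tiny image set. Blocks whose removal
# changes the output least are the easiest to recover from.
model = torchvision.models.resnet50(weights=None).eval()
images = torch.randn(8, 3, 224, 224)  # stand-in for a tiny training set

with torch.no_grad():
    reference = model(images)

scores = {}
for stage_name in ["layer1", "layer2", "layer3", "layer4"]:
    stage = getattr(model, stage_name)
    # Skip block 0 of each stage: it changes the spatial size and channel
    # count, so it cannot be replaced by an identity shortcut.
    for idx in range(1, len(stage)):
        block = stage[idx]
        original_forward = block.forward
        block.forward = lambda x: x  # bypass the block (identity shortcut)
        with torch.no_grad():
            perturbed = model(images)
        block.forward = original_forward
        # Larger output change = harder to recover after dropping this block.
        scores[f"{stage_name}.{idx}"] = (perturbed - reference).norm().item()

for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.2f}")
```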