Deep neural networks achieve state-of-the-art performance on a variety of tasks by extracting a rich set of features from unstructured data; however, this performance is closely tied to model size. Modern techniques for inducing sparsity and reducing model size include (1) network pruning, (2) training with a sparsity-inducing penalty, and (3) training a binary mask jointly with the weights of the network. We study different sparsity-inducing penalties from the perspective of Bayesian hierarchical models and present a novel penalty called Hierarchical Adaptive Lasso (HALO), which learns to adaptively sparsify the weights of a given network via trainable parameters. When used to train over-parametrized networks, our penalty yields small subnetworks with high accuracy without fine-tuning. Empirically, on image recognition tasks, we find that HALO learns highly sparse networks (retaining only 5% of the parameters) with significant gains in performance over state-of-the-art magnitude pruning methods at the same level of sparsity. Code is available at https://github.com/skyler120/sparsity-halo.
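To make the idea of a sparsity-inducing penalty with trainable parameters concrete, below is a minimal PyTorch sketch of a re-weighted (adaptive-lasso-style) L1 term whose per-weight scales are learned jointly with the network. This is not the exact HALO formulation from the paper; the class name `AdaptiveLassoPenalty`, the parameter `log_scale`, and the `strength` coefficient are hypothetical illustration choices, and a full hierarchical formulation would additionally regularize the trainable scales (as HALO does) so they cannot simply collapse the penalty to zero.

```python
import torch
import torch.nn as nn

class AdaptiveLassoPenalty(nn.Module):
    """Sketch of a sparsity-inducing penalty with trainable per-weight scales.

    Illustrative only: a re-weighted L1 term whose scales are trained jointly
    with the network weights. The actual HALO penalty also penalizes the
    trainable parameters themselves, which is omitted here for brevity.
    """

    def __init__(self, weight_shape, strength=1e-4):
        super().__init__()
        # One trainable (log-)scale per weight, updated by the same optimizer.
        self.log_scale = nn.Parameter(torch.zeros(weight_shape))
        self.strength = strength

    def forward(self, weight):
        # Re-weighted L1: weights with larger learned scales are shrunk harder.
        return self.strength * (torch.exp(self.log_scale) * weight.abs()).sum()

# Usage sketch: add the penalty to the task loss during training.
layer = nn.Linear(128, 10)
penalty = AdaptiveLassoPenalty(layer.weight.shape)
optimizer = torch.optim.SGD(
    list(layer.parameters()) + list(penalty.parameters()), lr=0.1
)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(layer(x), y) + penalty(layer.weight)
loss.backward()
optimizer.step()
```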