The ability of Deep Neural Networks to approximate highly complex functions is key to their success. This benefit, however, often comes at the cost of a large model size, which hinders deployment in resource-constrained environments. To mitigate this issue, pruning techniques introduce sparsity into the model, but typically at the expense of accuracy and adversarial robustness. This paper addresses these critical issues and introduces Deadwooding, a novel pruning technique that exploits a Lagrangian dual method to encourage model sparsity while retaining accuracy and ensuring robustness. The resulting models are shown to significantly outperform state-of-the-art pruning methods in both robustness and accuracy.
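To make the abstract's central idea concrete, a minimal sketch of a Lagrangian-dual formulation for sparsity-constrained training follows; the symbols here (loss $\mathcal{L}$, weights $\theta$, sparsity budget $k$, dual variable $\lambda$) are illustrative assumptions, not necessarily the paper's exact notation or objective:

$$
\min_{\theta} \; \mathcal{L}(\theta) \quad \text{s.t.} \quad \|\theta\|_0 \le k
\qquad \Longrightarrow \qquad
\max_{\lambda \ge 0} \; \min_{\theta} \; \mathcal{L}(\theta) + \lambda \left( \|\theta\|_0 - k \right).
$$

Maximizing over the dual variable $\lambda$ penalizes violations of the sparsity constraint, so under this reading sparsity is promoted through the training objective itself rather than by post-hoc weight removal, which is what allows accuracy and robustness terms to be traded off inside the same optimization.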