The compute-intensive nature of neural networks (NNs) limits their deployment in resource-constrained environments such as cell phones, drones, and autonomous robots. Hence, developing robust sparse models fit for safety-critical applications has been a problem of longstanding interest. Although adversarial training has been combined with model sparsification to attain this goal, conventional adversarial training approaches provide no formal guarantee that a model is robust against every adversarial sample in a restricted space around a benign sample. Recently proposed verified local robustness techniques provide such a guarantee. This is the first paper to combine ideas from verified local robustness and dynamic sparse training to develop `SparseVLR' -- a novel framework for searching for verified locally robust sparse networks. The obtained sparse models exhibit accuracy and robustness comparable to their dense counterparts at sparsity as high as 99%. Furthermore, unlike most conventional sparsification techniques, SparseVLR does not require a pre-trained dense model, reducing training time by 50%. We exhaustively investigated SparseVLR's efficacy and generalizability by evaluating various benchmark and application-specific datasets across several models.