Despite their tremendous successes, convolutional neural networks (CNNs) incur high computational/storage costs and are vulnerable to adversarial perturbations. Recent works on robust model compression address these challenges by combining model compression techniques with adversarial training. But these methods are unable to improve throughput (frames-per-second) on real-life hardware while simultaneously preserving robustness to adversarial perturbations. To overcome this problem, we propose the method of Generalized Depthwise-Separable (GDWS) convolution -- an efficient, universal, post-training approximation of a standard 2D convolution. GDWS dramatically improves the throughput of a standard pre-trained network on real-life hardware while preserving its robustness. Moreover, GDWS scales to large problem sizes since it operates on pre-trained models and does not require any additional training. We establish the optimality of GDWS as a 2D convolution approximator and present exact algorithms for constructing optimal GDWS convolutions under complexity and error constraints. We demonstrate the effectiveness of GDWS via extensive experiments on CIFAR-10, SVHN, and ImageNet datasets. Our code can be found at https://github.com/hsndbk4/GDWS.
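To make the post-training approximation idea concrete, the sketch below shows a minimal, hypothetical way to replace a pre-trained standard 2D convolution with a depthwise + pointwise pair by taking a rank-1 SVD of each input channel's kernel slice. This is only an illustration of the general depthwise-separable approximation principle under assumed PyTorch layer shapes; it is not the authors' GDWS construction, which generalizes this by allocating multiple depthwise kernels per channel optimally under complexity and error constraints (see the linked repository). The function name `rank1_depthwise_separable` is hypothetical.

```python
import torch
import torch.nn as nn

def rank1_depthwise_separable(conv: nn.Conv2d) -> nn.Sequential:
    """Illustrative sketch: approximate a standard Conv2d (no groups) with a
    depthwise conv followed by a 1x1 pointwise conv, fitted post-training."""
    W = conv.weight.data                       # shape (C_out, C_in, k, k)
    C_out, C_in, k, _ = W.shape

    depthwise = nn.Conv2d(C_in, C_in, k, stride=conv.stride,
                          padding=conv.padding, groups=C_in, bias=False)
    pointwise = nn.Conv2d(C_in, C_out, 1, bias=conv.bias is not None)

    for i in range(C_in):
        # Kernel slice for input channel i, viewed as a (C_out, k*k) matrix.
        M = W[:, i, :, :].reshape(C_out, k * k)
        U, S, Vh = torch.linalg.svd(M, full_matrices=False)
        # Best rank-1 factorization: M ~= (U[:, 0] * S[0]) outer Vh[0, :].
        depthwise.weight.data[i, 0] = Vh[0].reshape(k, k)
        pointwise.weight.data[:, i, 0, 0] = U[:, 0] * S[0]

    if conv.bias is not None:
        pointwise.bias.data.copy_(conv.bias.data)
    return nn.Sequential(depthwise, pointwise)
```

Since the composed layer applies the per-channel kernel first and then mixes channels with the 1x1 weights, its effective kernel for each (output, input) channel pair is the rank-1 approximation of the original slice; no retraining is involved, which is the sense in which the approximation is "post-training."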