Deep Neural Networks (DNNs) need to be both efficient and robust for practical use. Quantization and structural simplification are promising ways to adapt DNNs to mobile devices, and adversarial training is the most popular method for making DNNs robust. In this work, we aim to obtain both properties by applying a convergent relaxation quantization algorithm, BinaryRelax (BR), to a robust adversarially trained model, ResNets Ensemble via Feynman-Kac Formalism (EnResNet). We also find that higher-precision quantization (relative to binary), such as ternary (TNN) and 4-bit, produces sparse DNNs; under adversarial training, however, this sparsity is unstructured. To address the problems that adversarial training degrades DNNs' accuracy on clean images and destroys the structure of the sparsity, we design a trade-off loss function that helps DNNs preserve their natural accuracy and improve channel sparsity. With this trade-off loss function, we achieve both goals with no reduction of robustness under weak attacks and only a minor reduction under strong attacks. Combining quantized EnResNet with the trade-off loss function, we obtain robust models with high efficiency.
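As a minimal sketch of the kind of trade-off objective described above, the following combines the loss on clean images with the loss on their adversarial counterparts through a weighting parameter. The exact form, the parameter name `lam`, and the function names are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def tradeoff_loss(logits_clean, logits_adv, labels, lam=0.5):
    # Convex combination of the natural (clean) loss and the adversarial
    # loss: lam = 0 recovers standard training, lam = 1 recovers pure
    # adversarial training; intermediate values trade natural accuracy
    # against robustness.
    return ((1.0 - lam) * cross_entropy(logits_clean, labels)
            + lam * cross_entropy(logits_adv, labels))
```

Here `logits_adv` would come from forwarding attacked inputs (e.g. produced by PGD) through the same network; sweeping `lam` traces out the accuracy-robustness trade-off.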