Due to limited computational and energy budgets, most neural network models deployed on mobile devices are tiny. However, tiny neural networks are commonly highly vulnerable to adversarial attacks. Prior research has shown that a larger model size can improve robustness, but little work has focused on how to enhance the robustness of tiny neural networks. Our work focuses on improving the robustness of tiny neural networks without severely deteriorating clean accuracy under mobile-level resource constraints. To this end, we propose a multi-objective one-shot neural architecture search (NAS) algorithm to obtain the best trade-off networks in terms of adversarial accuracy, clean accuracy, and model size. Specifically, we design a novel search space based on new tiny blocks and channel configurations to balance model size and adversarial performance. Moreover, since the supernet significantly affects the performance of the subnets in our NAS algorithm, we reveal insights into how the supernet helps to obtain the best subnet under white-box adversarial attacks. Concretely, we explore a new adversarial training paradigm by analyzing adversarial transferability, the width of the supernet, and the difference between training subnets from scratch and fine-tuning them. Finally, we conduct a statistical analysis of the layer-wise combinations of blocks and channels on the first non-dominated front, which can serve as a guideline for designing tiny neural network architectures that are resilient to adversarial perturbations.
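To make the multi-objective selection concrete, the following is a minimal sketch (with hypothetical names such as `Candidate`, `dominates`, and `first_front`, which are not from the paper) of how the first non-dominated front over the three stated objectives could be extracted: adversarial accuracy and clean accuracy are maximized while model size is minimized, in the usual Pareto sense.

```python
# Minimal sketch of extracting the first non-dominated front of subnets.
# All names and the three-objective encoding are illustrative assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    arch: tuple        # layer-wise encoding of block type and channel width
    adv_acc: float     # accuracy under white-box attack (maximize)
    clean_acc: float   # accuracy on clean inputs (maximize)
    size_mb: float     # model size in MB (minimize)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse on all objectives and strictly better on one."""
    no_worse = (a.adv_acc >= b.adv_acc and a.clean_acc >= b.clean_acc
                and a.size_mb <= b.size_mb)
    strictly_better = (a.adv_acc > b.adv_acc or a.clean_acc > b.clean_acc
                       or a.size_mb < b.size_mb)
    return no_worse and strictly_better

def first_front(population: List[Candidate]) -> List[Candidate]:
    """Keep every candidate that no other candidate dominates."""
    return [p for p in population if not any(dominates(q, p) for q in population)]
```

In a multi-objective NAS loop of this kind, the front returned by `first_front` is what the statistical analysis of layer-wise block and channel combinations would be computed over.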
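The abstract does not name the white-box attack used to score the adversarial-accuracy objective; as an assumption, the sketch below uses L-infinity PGD, a common representative white-box attack, with illustrative hyperparameters (`eps`, `alpha`, `steps`) rather than the paper's settings.

```python
# Minimal PyTorch sketch of an L-infinity PGD attack for measuring
# adversarial accuracy. Hyperparameter values are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iteratively ascend the loss and project back into the eps-ball around x."""
    # random start inside the eps-ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # signed gradient step, then projection onto the eps-ball and [0, 1]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Under this assumption, a subnet's adversarial accuracy would be its accuracy on `model(pgd_attack(model, x, y))` over a held-out set, which then feeds the multi-objective selection sketched above.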