Deep Neural Networks are vulnerable to adversarial attacks. Neural Architecture Search (NAS), one of the driving tools behind deep neural networks, demonstrates superior prediction accuracy in various machine learning applications. However, it is unclear how it performs against adversarial attacks. Given the presence of a robust teacher, it is natural to ask whether NAS can produce robust neural architectures by inheriting robustness from that teacher. In this paper, we propose Robust Neural Architecture Search by Cross-Layer Knowledge Distillation (RNAS-CL), a novel NAS algorithm that improves the robustness of NAS by learning from a robust teacher through cross-layer knowledge distillation. Unlike previous knowledge distillation methods that encourage close student/teacher outputs only at the last layer, RNAS-CL automatically searches for the best teacher layer to supervise each student layer. Experimental results demonstrate the effectiveness of RNAS-CL and show that RNAS-CL produces small and robust neural architectures.
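To make the cross-layer idea concrete, the following is a minimal, hypothetical sketch of such a distillation loss in PyTorch. It is not the paper's implementation: the soft assignment over teacher layers, the 1x1 projections, and the MSE feature matching are illustrative assumptions standing in for the layer-matching search described above.

```python
# Hypothetical sketch of cross-layer knowledge distillation (not the authors' code).
# Each student layer learns a soft assignment over teacher layers and is pulled
# toward the teacher features it assigns the most weight to.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerKDLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # One learnable logit per (student layer, teacher layer) pair.
        self.assign_logits = nn.Parameter(
            torch.zeros(len(student_channels), len(teacher_channels)))
        # 1x1 convs projecting student channels to each teacher layer's channels (assumption).
        self.proj = nn.ModuleList([
            nn.ModuleList([nn.Conv2d(sc, tc, kernel_size=1) for tc in teacher_channels])
            for sc in student_channels])

    def forward(self, student_feats, teacher_feats):
        loss = 0.0
        for i, s in enumerate(student_feats):
            # Soft "which teacher layer supervises this student layer" weights.
            weights = F.softmax(self.assign_logits[i], dim=0)
            for j, t in enumerate(teacher_feats):
                s_proj = self.proj[i][j](s)
                # Match spatial resolution before comparing features.
                s_proj = F.adaptive_avg_pool2d(s_proj, t.shape[-2:])
                loss = loss + weights[j] * F.mse_loss(s_proj, t.detach())
        return loss
```

In this sketch the assignment logits would be optimized jointly with the architecture/weights, so that after training each student layer concentrates its weight on the teacher layer that supervises it best; the actual search strategy used by RNAS-CL may differ.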