In deep learning applications, the architecture of a deep neural network is crucial to achieving high accuracy. Many methods have been proposed to search for high-performance neural architectures automatically. However, the searched architectures are prone to adversarial attacks: a small perturbation of the input data can significantly change the prediction outcome. To address this problem, we propose methods for differentiable search of robust neural architectures. In our methods, two differentiable metrics, based on the certified lower bound and the Jacobian norm bound, are defined to measure an architecture's robustness. We then search for robust architectures by maximizing these robustness metrics. Unlike previous approaches, which improve architectures' robustness implicitly by performing adversarial training or injecting random noise, our methods explicitly and directly maximize the robustness metrics to obtain robust architectures. On CIFAR-10, ImageNet, and MNIST, we perform game-based and verification-based evaluations of the robustness of our methods. The experimental results show that our methods 1) are more robust to various norm-bound attacks than several robust NAS baselines; 2) are more accurate than the baselines when there are no attacks; and 3) have significantly higher certified lower bounds than the baselines.
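As a minimal illustrative sketch (not the paper's exact formulation), a Jacobian-norm-based robustness metric can be made differentiable via automatic differentiation: a smaller input-output Jacobian norm implies a larger robustness bound, so a Jacobian-norm penalty can be minimized alongside the task loss during architecture search. The helper `jacobian_norm_penalty` below and its Hutchinson-style Frobenius-norm estimator are assumptions introduced for illustration, written in PyTorch.

```python
import torch

def jacobian_norm_penalty(model, x):
    """Differentiable estimate of the squared Frobenius norm of the
    input-output Jacobian, usable as a robustness penalty.
    A sketch under assumed conventions; the paper's metric may differ."""
    x = x.clone().requires_grad_(True)
    y = model(x)
    # Hutchinson-style estimator: for v ~ N(0, I),
    # E[||J^T v||^2] = ||J||_F^2, avoiding an explicit Jacobian.
    v = torch.randn_like(y)
    (jv,) = torch.autograd.grad((y * v).sum(), x, create_graph=True)
    # Sum over feature dimensions, average over the batch.
    return jv.pow(2).sum(dim=tuple(range(1, jv.dim()))).mean()

# Hypothetical usage inside a search/training step:
#   loss = task_loss + lam * jacobian_norm_penalty(model, x)
# where lam trades off accuracy against the robustness penalty.
```

Because the penalty is differentiable with respect to both the network weights and any continuous architecture parameters that affect the forward pass, it can be folded directly into a gradient-based (differentiable) architecture search objective.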