Machine-learning architectures, such as Convolutional Neural Networks (CNNs), are vulnerable to adversarial attacks: carefully crafted inputs that force the system to output an incorrect label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that successful adversarial attacks against the exact classifier transfer poorly to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, in which the attacker has unrestricted access to the approximate classifier implementation: in this case, we show that substantially higher levels of adversarial noise are needed to produce adversarial examples. Furthermore, our approximate computing model maintains the same classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments against a set of strong adversarial attacks and empirically show that the proposed implementation considerably increases the robustness of LeNet-5, AlexNet, and VGG-11 CNNs, with up to 50% by-product savings in energy consumption due to the simpler nature of the approximate logic.
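As an illustrative sketch only (not the paper's method), adversarial examples of the kind described above can be produced with the fast gradient sign method (FGSM), one widely used attack. The model, input tensor, and epsilon value below are hypothetical placeholders, assuming a differentiable PyTorch classifier.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    `model` is any differentiable classifier; `epsilon` bounds the
    per-pixel perturbation (illustrative value, not from the paper).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In the transferability setting described in the abstract, such an example would be crafted against the exact classifier and then fed to the approximate implementation; in the white-box setting, the gradients would be taken through the approximate classifier itself.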