Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks. Memristive crossbars, which can perform Matrix-Vector-Multiplications (MVMs) efficiently, are used to realize DNNs in hardware. However, crossbar non-idealities have always been viewed unfavorably, since the errors they introduce into MVMs degrade the computational accuracy of DNNs. Several software-based defenses have been proposed to make DNNs adversarially robust. However, no previous work has demonstrated the advantage conferred by crossbar non-idealities in unleashing adversarial robustness. We show that these intrinsic hardware non-idealities impart adversarial robustness to the mapped DNNs without any additional optimization. We evaluate the adversarial resilience of state-of-the-art DNNs (VGG8 and VGG16 networks) on benchmark datasets (CIFAR-10, CIFAR-100, and Tiny ImageNet) across various crossbar sizes. We find that crossbar non-idealities unleash significantly greater adversarial robustness (>10-20%) in crossbar-mapped DNNs than in baseline software DNNs. We further compare our approach against other state-of-the-art efficiency-driven adversarial defenses and find that it performs significantly well in reducing adversarial loss.
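As a minimal illustration (not part of the original abstract), the effect of crossbar non-idealities on an MVM can be sketched as multiplicative perturbations of the weight matrix; the Gaussian noise model and the `sigma` value below are simplifying assumptions for exposition, not the paper's actual device model.

```python
import numpy as np

def ideal_mvm(W, x):
    """Ideal matrix-vector multiplication, as computed in software."""
    return W @ x

def nonideal_mvm(W, x, sigma=0.1, rng=None):
    """MVM with crossbar non-idealities modeled (illustratively) as
    multiplicative Gaussian variations in the programmed conductances."""
    rng = np.random.default_rng(0) if rng is None else rng
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return W_noisy @ x

# Tiny example: the non-ideal output deviates from the ideal one,
# which is the perturbation the abstract credits with robustness.
W = np.ones((4, 3))
x = np.ones(3)
y_ideal = ideal_mvm(W, x)
y_noisy = nonideal_mvm(W, x)
```

In this toy model, a larger `sigma` corresponds to stronger non-idealities and hence a larger deviation between the software MVM and the crossbar-mapped MVM.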