We provide a robust defence against adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations of the input data that lead to wrong predictions. In contrast, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines as attack-resistant discriminative classifiers and compare them against standard state-of-the-art adversarial defences. On the MNIST dataset, Boltzmann machines yield improvements under attack ranging from 5% to 72%. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, obtaining results comparable with classical techniques, with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and highlight a novel scenario of practical relevance where quantum computers, even with limited hardware capabilities, could provide advantages over classical computers. This work is dedicated to the memory of Peter Wittek.
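To make the approach concrete, the following is a minimal sketch, not the authors' implementation: it trains one Bernoulli restricted Boltzmann machine per class with scikit-learn's classical contrastive-divergence learner (the paper additionally explores quantum-annealer-assisted sampling), classifies test points by lowest free energy, and probes robustness with an FGSM-style perturbation derived from the free-energy gradient. The small 8x8 digits set, the network sizes, and the attack budget `eps` are illustrative stand-ins for the paper's MNIST setup.

```python
# Sketch: generative RBM classifiers probed with a gradient-sign attack.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM

def free_energy(v, rbm):
    """F(v) = -v.b - sum_j log(1 + exp(c_j + (W v)_j)) for a Bernoulli RBM."""
    act = v @ rbm.components_.T + rbm.intercept_hidden_
    return -v @ rbm.intercept_visible_ - np.logaddexp(0.0, act).sum(axis=1)

def free_energy_grad(v, rbm):
    """dF/dv = -b - sigmoid(c + W v) W, used for the FGSM-style attack."""
    h = 1.0 / (1.0 + np.exp(-(v @ rbm.components_.T + rbm.intercept_hidden_)))
    return -rbm.intercept_visible_ - h @ rbm.components_

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1] for the Bernoulli units
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# One generative model per digit class, trained only on that class's data.
rbms = {c: BernoulliRBM(n_components=64, learning_rate=0.05,
                        n_iter=30, random_state=0).fit(Xtr[ytr == c])
        for c in range(10)}

def predict(V):
    # Lowest free energy across the 10 class-conditional RBMs wins.
    F = np.stack([free_energy(V, rbms[c]) for c in range(10)], axis=1)
    return F.argmin(axis=1)

# FGSM-style attack: push each test point uphill in free energy under
# its true class model (eps is an arbitrary perturbation budget here).
eps = 0.1
Xadv = np.clip(np.stack([x + eps * np.sign(free_energy_grad(x[None], rbms[c])[0])
                         for x, c in zip(Xte, yte)]), 0.0, 1.0)

print("clean accuracy:      ", (predict(Xte) == yte).mean())
print("adversarial accuracy:", (predict(Xadv) == yte).mean())
```

The classification rule reflects the abstract's central point: because each RBM models a class-conditional distribution, a small perturbation must substantially raise the free energy of the true class relative to all others to flip the prediction, rather than merely crossing a thin discriminative decision boundary.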