Quantum classifiers have recently been shown to be vulnerable to adversarial attacks, in which imperceptible perturbations fool them into misclassification. In this paper, we present the first theoretical study showing that added quantum random rotation noise can improve the robustness of quantum classifiers against adversarial attacks. We draw a connection to the definition of differential privacy and demonstrate that a quantum classifier trained in the natural presence of additive noise is differentially private. Finally, we derive a certified robustness bound that enables quantum classifiers to defend against adversarial examples, supported by experimental results.
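The core idea can be illustrated with a minimal toy sketch: a single-qubit "classifier" scores a state by its Pauli-Z expectation, and the noise-averaged (smoothed) version averages that score over random rotations with Gaussian-distributed angles. All names here (`rx`, `classify`, `smoothed_score`) and the choice of a single-qubit RX-noise channel are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def rx(theta):
    # Single-qubit rotation about the X axis (toy noise channel -- assumption).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def classify(state):
    # Toy classifier score: the Pauli-Z expectation <psi|Z|psi>;
    # the predicted label would be its sign.
    z = np.array([[1, 0], [0, -1]])
    return float(np.real(np.conj(state) @ z @ state))

def smoothed_score(state, sigma=0.3, n_samples=2000):
    # Noise-averaged score: average over random rotation angles
    # theta ~ N(0, sigma^2), i.e. the randomized-smoothing analogue
    # of training/evaluating with quantum rotation noise.
    thetas = rng.normal(0.0, sigma, n_samples)
    return float(np.mean([classify(rx(t) @ state) for t in thetas]))

psi = np.array([1.0, 0.0], dtype=complex)  # |0>, has Z-expectation +1
print(classify(psi))        # exact score: 1.0
print(smoothed_score(psi))  # shrunk toward 0 but sign-preserving
```

For the state |0>, the rotated Z-expectation is cos(theta), so the smoothed score concentrates near exp(-sigma^2/2) < 1: the noise shrinks the classification margin slightly, but a sufficiently large surviving margin is exactly what a certified robustness bound converts into a guaranteed radius of adversarial perturbations the classifier withstands.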