The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods face a serious threat from adversarial examples. To address this, we first propose a signal-specific method and a universal signal-agnostic method for attacking power systems with generated adversarial examples. We also propose and evaluate black-box attacks that exploit the transferability of the adversarial examples produced by these two methods. We then adopt adversarial training to defend systems against adversarial attacks. Experimental analyses demonstrate that our signal-specific attack method requires smaller perturbations than FGSM (the Fast Gradient Sign Method), and that our signal-agnostic attack method generates perturbations that fool most natural signals with high probability. Moreover, the attack based on the universal signal-agnostic algorithm achieves a higher black-box transfer rate than the attack based on the signal-specific algorithm. Finally, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples.
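For context, the FGSM baseline referenced above perturbs each input in the direction of the sign of the loss gradient, and adversarial training defends against such attacks by training the model on perturbed inputs. The following is a minimal sketch in PyTorch, assuming a classification model and cross-entropy loss; the function names, the `epsilon` budget, and the framework choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """FGSM sketch: x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input element in the direction that increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon):
    """One adversarial-training step: fit the model on FGSM examples
    instead of the clean inputs (illustrative; the paper's training
    procedure may differ)."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The signal-specific and signal-agnostic attacks studied in the paper are distinct algorithms; this sketch only illustrates the gradient-sign baseline they are compared against and the corresponding defense.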