Despite the great progress of neural network-based (NN-based) machinery fault diagnosis methods, their robustness has been largely neglected: these models can be easily fooled by adding imperceptible perturbations to the input. In this paper, we reformulate various adversarial attacks for fault diagnosis problems and investigate them intensively under both untargeted and targeted conditions. Experimental results on six typical NN-based models show that the accuracy of each model is greatly reduced by small perturbations. We further propose a simple, efficient, and universal scheme to protect the victim models. This work provides an in-depth look at adversarial examples of machinery vibration signals, laying a foundation for developing defenses against adversarial attacks and improving the robustness of NN-based models.
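The abstract does not specify which attacks are reformulated, so as an illustration only, the following is a minimal sketch of one standard gradient-based attack of the kind described, the fast gradient sign method (FGSM), assuming a PyTorch classifier over 1-D vibration signals; the function name, signature, and epsilon value are hypothetical, not the paper's method.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.01, targeted=False):
    """One-step FGSM perturbation of a vibration-signal batch (illustrative sketch).

    For an untargeted attack, y holds the true fault labels and the
    perturbation moves the input to *increase* the classification loss;
    for a targeted attack, y holds the attacker's desired labels and the
    perturbation *decreases* the loss toward them. epsilon bounds the
    per-element change, keeping the perturbation imperceptibly small.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Untargeted: step up the loss gradient; targeted: step down it.
    direction = -1.0 if targeted else 1.0
    return (x_adv + direction * epsilon * x_adv.grad.sign()).detach()
```

Under this sketch, an untargeted attack only needs the true labels (`fgsm_attack(model, x, y)`), while a targeted attack passes the labels the attacker wants the model to output (`fgsm_attack(model, x, y_target, targeted=True)`).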