This study explores how robots and generative approaches can be used to mount successful false-acceptance adversarial attacks on signature verification systems. First, a convolutional neural network topology and data augmentation strategy are explored and tuned, producing a model that is 87.12% accurate in verifying 2,640 human signatures. Two robots are then tasked with forging 50 signatures: 25 are used for the verification attack, and the remaining 25 are used to tune the model to defend against them. Adversarial attacks on the system show that an information security risk exists; the Line-us robotic arm can fool the system 24% of the time and the iDraw 2.0 robot 32% of the time. A conditional GAN achieves similar success, with around 30% of forged signatures misclassified as genuine. After fine-tuning the model on the robotic and generative data via transfer learning, adversarial attacks by both the robots and the GAN are reduced below the model threshold. Tuning the model reduces the risk of attack by the two robots to 8% and 12% respectively, and conditional generative adversarial attacks are reduced to 4% when 25 images are presented and 5% when 1,000 images are presented.