An attack on a deep learning system in which intelligent machines collaborate to solve problems can cause a node in the network to err on a critical judgment. At the same time, the security and privacy concerns of AI have galvanized the attention of experts from multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to interfere with the learning process and to reconstruct users' private data by learning hidden features from the shared local model parameters. The attack was target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the adversary's performance during the learning process under various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all the applied datasets.
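To make the attack idea concrete, the sketch below shows how an adversarial FL participant can repurpose the shared global model as a fixed discriminator and train a local generator until its outputs are classified as the victim's target class. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: the `Generator` architecture, the `attack_round` helper, and all hyperparameters are illustrative, and the global model is assumed to return class logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Minimal MLP generator; architecture and sizes are illustrative."""
    def __init__(self, latent_dim=100, img_shape=(1, 28, 28)):
        super().__init__()
        self.img_shape = img_shape
        out_dim = img_shape[0] * img_shape[1] * img_shape[2]
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(z.size(0), *self.img_shape)

def attack_round(generator, optimizer, global_model, target_class,
                 latent_dim=100, batch_size=64):
    """One attack round: the shared global model acts as a fixed
    discriminator, and the generator is pushed to emit samples that
    the global model assigns to the victim's target class."""
    global_model.eval()
    for p in global_model.parameters():   # freeze the shared model
        p.requires_grad_(False)
    z = torch.randn(batch_size, latent_dim)
    fake = generator(z)
    logits = global_model(fake)           # score fakes with the global model
    target = torch.full((batch_size,), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In an actual FL round the adversary would run this between downloading the global parameters and uploading its (possibly poisoned) local update, so the generator improves as the global model converges.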
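The evaluation metric is simple to state: for each reconstructed sample, compute the L2 (Euclidean) distance to its real counterpart and average over the batch, with lower values indicating a more faithful reconstruction. A minimal sketch, assuming tensors of shape (N, C, H, W); the helper name `mean_euclidean_distance` is ours.

```python
import torch

def mean_euclidean_distance(real, reconstructed):
    """Mean per-sample L2 distance between real data and reconstructions."""
    diff = (real - reconstructed).flatten(start_dim=1)  # (N, D)
    return diff.norm(dim=1).mean().item()

# Usage with dummy data in place of victim samples and GAN outputs.
real = torch.rand(8, 1, 28, 28)
fake = torch.rand(8, 1, 28, 28)
print(mean_euclidean_distance(real, fake))
```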