Federated learning (FL) allows participants to collaboratively train machine and deep learning models while preserving data privacy. However, the FL paradigm still presents drawbacks that affect its trustworthiness, since malicious participants can launch adversarial attacks against the training process. Related work has studied the robustness of horizontal FL scenarios under different attacks, but there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Thus, this work proposes three decentralized FL architectures, one for horizontal and two for vertical scenarios, namely HoriChain, VertiChain, and VertiComb. These architectures use different neural networks and training protocols suited to horizontal and vertical scenarios. Then, a decentralized, privacy-preserving, and federated use case with non-IID data for classifying handwritten digits is deployed to evaluate the performance of the three architectures. Finally, a set of experiments computes and compares the robustness of the proposed architectures when they are affected by two adversarial attacks: data poisoning based on image watermarks and gradient poisoning. The experiments show that, even though particular configurations of both attacks can destroy the classification performance of the architectures, HoriChain is the most robust one.
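To make the watermark-based data poisoning attack concrete, the sketch below stamps a small pixel pattern onto a fraction of training images and flips their labels to an attacker-chosen class. This is a minimal, hypothetical illustration of the general technique, assuming NumPy arrays of grayscale digit images; the function name, watermark shape, and parameters are illustrative and not taken from the paper.

```python
import numpy as np

def watermark_poison(images, labels, target_label, fraction=0.2, seed=0):
    """Illustrative data poisoning: stamp a 4x4 white square (the
    'watermark') in the bottom-right corner of a random fraction of
    images and flip their labels to target_label.

    images: float array of shape (n, H, W), pixel values in [0, 1]
    labels: int array of shape (n,)
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(fraction * n), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0  # embed the watermark pattern
        labels[i] = target_label   # mislabel the poisoned sample
    return images, labels, idx
```

A malicious FL participant would apply such a transformation to its local partition before training, so the poisoned gradients it shares degrade or backdoor the collaboratively trained model.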