Split Learning (SL) and Federated Learning (FL) are two prominent distributed collaborative learning techniques that preserve data privacy by ensuring clients never share their private data with other clients or servers, and they have found extensive IoT applications in smart healthcare, smart cities, and smart industry. Prior work has extensively explored the security vulnerabilities of FL in the form of poisoning attacks, and several defenses have been proposed to mitigate the effect of these attacks. Recently, a hybrid of the two learning techniques has emerged (commonly known as SplitFed) that capitalizes on their advantages (fast training) and eliminates their intrinsic disadvantages (centralized model updates). In this paper, we perform the first empirical analysis of SplitFed's robustness to strong model poisoning attacks. We observe that the model updates in SplitFed have significantly smaller dimensionality than those in FL, which is known to suffer from the curse of dimensionality. We show that large models with higher dimensionality are more susceptible to privacy and security attacks, whereas clients in SplitFed hold only a partial, lower-dimensional model, making them more robust to existing model poisoning attacks. Our results show that the accuracy reduction due to the model poisoning attack is 5x lower for SplitFed compared to FL.
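To make the dimensionality argument concrete, the sketch below (a hypothetical PyTorch example; the architecture and cut layer are illustrative assumptions, not the paper's setup) compares the number of parameters a client can perturb in FL, where it holds the full model, against SplitFed, where it holds only the layers before the cut layer.

```python
# Minimal sketch (illustrative assumptions, not the paper's model or code):
# compare the dimensionality of the client-side update in FL vs. SplitFed.
import torch.nn as nn

full_model = nn.Sequential(              # FL: the client holds the entire model
    nn.Conv2d(3, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

cut_layer = 4                            # SplitFed: the client keeps only layers before the cut
client_part = full_model[:cut_layer]

def num_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print("FL client update dimensionality:      ", num_params(full_model))
print("SplitFed client update dimensionality:", num_params(client_part))
# The client-side update in SplitFed spans far fewer parameters, which is the
# lower-dimensionality property the abstract argues limits model poisoning.
```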