We investigate the security of Split Learning -- a recent collaborative machine learning framework that aims for high performance while requiring minimal resource consumption from clients. In this paper, we expose vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. Most prominently, we show that a malicious server can actively hijack the learning process of the distributed model and steer it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack is able to overcome recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending attacks previously devised for Federated Learning. To make our results reproducible, we make our code available at https://github.com/pasquini-dario/SplitNN_FSHA.
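For context, the protocol under study splits a neural network between a client, which holds the private data and the first layers, and a server, which holds the remaining layers: the client sends only the cut-layer activations ("smashed data") and receives gradients back. The following is a minimal NumPy sketch of this exchange under simplifying assumptions (one linear layer per party, MSE loss computed server-side; all class and variable names are hypothetical, not the paper's implementation) -- it is exactly this activation/gradient channel that the attacks described above exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

class Client:
    """Holds the private data and the bottom part of the split network."""
    def __init__(self, in_dim, cut_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, cut_dim))

    def forward(self, x):
        # "Smashed data": cut-layer activations, the only thing sent to the server.
        self.x = x
        self.z = np.tanh(x @ self.W)
        return self.z

    def backward(self, grad_z, lr=0.1):
        # Gradient received from the server, propagated through tanh.
        grad_pre = grad_z * (1.0 - self.z ** 2)
        self.W -= lr * self.x.T @ grad_pre

class Server:
    """Holds the top part of the network; here it also computes the loss."""
    def __init__(self, cut_dim, out_dim):
        self.V = rng.normal(scale=0.1, size=(cut_dim, out_dim))

    def step(self, z, y, lr=0.1):
        pred = z @ self.V
        grad_pred = 2.0 * (pred - y) / len(y)   # batch-averaged MSE gradient
        grad_z = grad_pred @ self.V.T           # gradient sent back to the client
        self.V -= lr * z.T @ grad_pred
        return float(np.mean((pred - y) ** 2)), grad_z

# One split-learning training loop on toy data.
client, server = Client(4, 8), Server(8, 1)
x = rng.normal(size=(32, 4))
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)

losses = []
for _ in range(200):
    z = client.forward(x)              # client -> server: smashed data
    loss, grad_z = server.step(z, y)   # server -> client: gradients
    client.backward(grad_z)
    losses.append(loss)
```

Because the server freely chooses the loss and the gradients it returns, a malicious server can substitute an adversarial objective and steer the client's layers -- the core observation behind the hijacking attack summarized above.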