We investigate the security of Split Learning, a novel collaborative machine learning framework that achieves high performance while requiring minimal resource consumption. In this paper, we expose vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. Most prominently, we show that a malicious server can actively hijack the learning process of the distributed model and steer it into an insecure state that enables inference attacks on clients' data. We implement several adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack overcomes recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending attacks previously devised for Federated Learning. To make our results reproducible, we make our code available at https://github.com/pasquini-dario/SplitNN_FSHA.
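For readers unfamiliar with the protocol under attack, the following is a minimal, single-process sketch of vanilla split learning in its label-sharing configuration, written in a PyTorch style. The client_net/server_net split, the layer sizes, and all hyperparameters are illustrative assumptions, not the paper's exact setup. The sketch highlights the property that a server-side attack can exploit: the client's network is trained with gradients that the server computes and returns, so the server effectively controls the client's learning objective.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small classifier between client and server.
# The client keeps the first layers (and the raw data); the server
# runs the remaining layers and drives the optimization.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 10))

opt_client = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)    # private client batch (toy data)
y = torch.randint(0, 10, (32,))   # labels, shared with the server here

# Client forward pass: only the "smashed data" (intermediate
# activations) leave the client, never the raw inputs.
smashed = client_net(x)
# Simulate the network boundary: the server sees a detached copy.
smashed_remote = smashed.detach().requires_grad_(True)

# Server-side forward/backward on its part of the model.
out = server_net(smashed_remote)
loss = loss_fn(out, y)
opt_server.zero_grad()
loss.backward()
opt_server.step()

# The server returns the gradient of the loss w.r.t. the smashed
# data; the client resumes backpropagation locally from there.
opt_client.zero_grad()
smashed.backward(smashed_remote.grad)
opt_client.step()
```

Note that the gradient returned in the last step is the only training signal the client ever receives. The malicious-server attack described above exploits exactly this: by deriving that gradient from an adversarial objective instead of the honest task loss, the server can hijack what the client's network learns and drive its feature space into a state that permits reconstruction of the private inputs.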