Split learning and differential privacy are technologies with growing potential for privacy-compliant advanced analytics on distributed datasets. Attacks against split learning are an important evaluation tool and have recently received increased research attention. This work's contribution is to apply a recent feature space hijacking attack (FSHA) to the learning process of a split neural network enhanced with differential privacy (DP), using a client-side off-the-shelf DP optimizer. The FSHA attack reconstructs the client's private data with low error rates at arbitrarily set DP epsilon levels. We also experiment with dimensionality reduction as a potential attack-risk mitigation and show that it may help to some extent. We discuss the reasons why differential privacy is not an effective protection in this setting and mention other potential risk mitigation methods.
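To make the evaluated setup concrete, the following is a minimal sketch of a split-learning client trained with an off-the-shelf DP optimizer. It assumes PyTorch with Opacus as the DP library; the model architecture, toy data, and hyperparameters are illustrative placeholders, not the paper's configuration.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy data standing in for the client's private dataset.
xs = torch.randn(256, 1, 28, 28)
ys = torch.randint(0, 10, (256,))
train_loader = DataLoader(TensorDataset(xs, ys), batch_size=32)

# Hypothetical client-side ("bottom") part of the split network;
# the layer sizes are illustrative only.
client_model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
)

optimizer = optim.SGD(client_model.parameters(), lr=0.05)

# Off-the-shelf DP optimizer: Opacus wraps the optimizer so that each
# step clips per-sample gradients and adds Gaussian noise (DP-SGD).
privacy_engine = PrivacyEngine()
client_model, optimizer, train_loader = privacy_engine.make_private(
    module=client_model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # larger noise -> smaller epsilon
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

# In split learning, only the intermediate activations ("smashed data")
# cross the network boundary to the server-side model.
smashed = client_model(xs[:32])
```

A plausible intuition for the abstract's finding is visible in the sketch: DP-SGD randomizes the client's parameter updates, but the smashed activations sent to the server are a deterministic function of the private inputs, which is the signal FSHA exploits.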