Collaborative inference has recently emerged as an attractive framework for applying deep learning to Internet of Things (IoT) applications by splitting a DNN model into several sub-models distributed among resource-constrained IoT devices and the cloud. However, a reconstruction attack was recently proposed that recovers the original input image from the intermediate outputs that can be collected from local models in collaborative inference. To address this privacy issue, a promising technique is to adopt differential privacy so that the intermediate outputs are protected with only a small accuracy loss. In this paper, we provide the first systematic study of the effectiveness of differential privacy for collaborative inference against the reconstruction attack. We specifically explore the privacy-accuracy trade-offs for three collaborative inference models on four datasets (SVHN, GTSRB, STL-10, and CIFAR-10). Our experimental analysis demonstrates that differential privacy can practically be applied to collaborative inference when a dataset has small intra-class variations in appearance. With the (empirically) optimized privacy budget parameter in our study, the differential privacy technique incurs accuracy losses of 0.476%, 2.066%, 5.021%, and 12.454% on the SVHN, GTSRB, STL-10, and CIFAR-10 datasets, respectively, while thwarting the reconstruction attack.
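To make the setting concrete, the following is a minimal sketch (not the paper's exact mechanism) of how a differentially private perturbation could be applied to the intermediate output of a split DNN before it leaves the IoT device. The split point, the clipping bound, the choice of the Laplace mechanism, and the per-element sensitivity accounting are all illustrative assumptions, not details taken from the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small CNN: `local_part` runs on the IoT device,
# `cloud_part` runs on the server. Architecture is illustrative only.
local_part = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
)
cloud_part = nn.Sequential(
    nn.Flatten(), nn.Linear(16 * 16 * 16, 10)  # e.g., 32x32 inputs, 10 classes
)

def private_intermediate(x: torch.Tensor, epsilon: float, clip: float = 1.0):
    """Clip each activation to [-clip, clip], then add Laplace noise.

    Clipping bounds the change of each released value to at most 2*clip,
    so Laplace noise with scale 2*clip/epsilon gives epsilon-DP per
    activation (a simplifying per-element accounting assumption).
    """
    z = local_part(x)
    z = torch.clamp(z, -clip, clip)
    noise = torch.distributions.Laplace(0.0, 2 * clip / epsilon).sample(z.shape)
    return z + noise

# Device side: perturb before transmission; cloud side: finish inference.
x = torch.randn(1, 3, 32, 32)  # stand-in for an SVHN/CIFAR-10-sized image
logits = cloud_part(private_intermediate(x, epsilon=2.0))
```

A smaller privacy budget `epsilon` injects more noise into the transmitted activations, which hinders the reconstruction attack but degrades accuracy; the privacy-accuracy trade-off reported above corresponds to tuning this kind of parameter empirically.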