Collaborative inference has recently emerged as a promising framework for applying deep learning to Internet of Things (IoT) applications: a DNN model is split into two sub-models, deployed respectively on resource-constrained IoT devices and in the cloud. Although IoT applications' raw input data is not directly exposed to the cloud in this framework, revealing the local sub-model's intermediate output still entails privacy risks. To mitigate these risks, differential privacy could in principle be adopted. However, the practicality of differential privacy for collaborative inference under various conditions remains unclear. For example, it is unclear how calibrating the privacy budget epsilon affects both protection strength and model accuracy in the presence of the state-of-the-art reconstruction attack targeting collaborative inference, and whether a good privacy-utility balance exists. In this paper, we provide the first systematic study assessing the effectiveness of differential privacy for protecting collaborative inference against the reconstruction attack, through extensive empirical evaluations on various datasets. Our results show that differential privacy can be used for collaborative inference when confronted with the reconstruction attack, and we provide insights into the privacy-utility trade-off. Specifically, across the evaluated datasets, we observe that a suitable privacy budget range exists (particularly 100 <= epsilon <= 200 in our evaluation) that provides a good trade-off between utility and privacy protection. The key observation drawn from our study is that differential privacy tends to perform better in collaborative inference on datasets with smaller intra-class variations, which, to our knowledge, is the first easy-to-adopt practical guideline.
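To make the setting concrete, the following minimal sketch shows a device-side sub-model whose intermediate output is perturbed before being sent to the cloud. The abstract does not specify the mechanism used in the study; this sketch assumes the standard Laplace mechanism (clipping activations to bound sensitivity, then adding Laplace noise with scale sensitivity/epsilon), and the function names (`local_forward`, `privatize`) are illustrative, not from the paper.

```python
import numpy as np

def local_forward(x, w_local):
    # Device-side sub-model: one linear layer with ReLU, an
    # illustrative stand-in for the local part of the split DNN.
    return np.maximum(0.0, x @ w_local)

def privatize(z, epsilon, clip=1.0):
    # Clip each activation into [0, clip] so the per-element
    # sensitivity is bounded by `clip`, then apply the Laplace
    # mechanism with noise scale = sensitivity / epsilon.
    z = np.clip(z, 0.0, clip)
    noise = np.random.laplace(loc=0.0, scale=clip / epsilon, size=z.shape)
    return z + noise

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))     # raw input stays on the device
w = rng.standard_normal((8, 4))     # weights of the local sub-model
z = local_forward(x, w)
# epsilon chosen from the 100-200 range the evaluation identifies
# as a good privacy-utility trade-off.
z_private = privatize(z, epsilon=150.0)
# Only z_private would be transmitted to the cloud-side sub-model.
```

Smaller epsilon values add more noise (stronger protection against reconstruction, lower accuracy); larger values do the opposite, which is the trade-off the study quantifies.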