Although Deep Neural Networks (DNNs) have become the backbone of many ubiquitous applications, deploying them on resource-constrained machines, e.g., Internet of Things (IoT) devices, remains challenging. To satisfy the resource requirements of this paradigm, collaborative deep inference across cooperating IoT devices was introduced. However, distributing a DNN suffers from severe data leakage. Various threats have been demonstrated, including black-box attacks, where malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures have been designed to achieve privacy-preserving DNNs, most of them incur additional computation and lower accuracy. In this paper, we present an approach that secures collaborative deep inference by re-thinking the distribution strategy, without sacrificing model performance. Particularly, we examine which DNN partitions make the model susceptible to black-box threats, and we derive the amount of data that should be allocated per device to hide properties of the original input. We formulate this methodology as an optimization problem in which we establish a trade-off between co-inference latency and the privacy level of the data. Then, to relax the optimal solution, we design our approach as a Reinforcement Learning (RL) framework that supports heterogeneous devices as well as multiple DNNs and datasets.