Embedded systems demand on-device processing of data using Neural Networks (NNs) while conforming to memory, power, and computation constraints, leading to an efficiency-accuracy tradeoff. To bring NNs to edge devices, several optimizations such as model compression through pruning and quantization, and off-the-shelf architectures with efficient design, have been extensively adopted. When these algorithms are deployed to real-world sensitive applications, they are required to resist inference attacks that threaten the privacy of users' training data. However, resistance against inference attacks is not accounted for when designing NN models for IoT. In this work, we analyse the three-dimensional privacy-accuracy-efficiency tradeoff in NNs for IoT devices and propose Gecko, a training methodology in which we explicitly add resistance to private inferences as a design objective. We optimize for the inference-time memory, computation, and power constraints of embedded devices as criteria for designing the NN architecture while also preserving privacy. We choose quantization as the design choice for highly efficient and private models. This choice is driven by the observation that compressed models leak more information than baseline models, while off-the-shelf efficient architectures exhibit a poor efficiency-privacy tradeoff. We show that models trained using the Gecko methodology are comparable to prior defences against black-box membership inference attacks in terms of accuracy and privacy while additionally providing efficiency.
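To make the quantization design choice concrete, the following is a minimal sketch of post-training dynamic quantization in PyTorch; it illustrates the general technique only, not the Gecko training methodology itself, and the toy model architecture shown here is a hypothetical stand-in for an IoT-scale classifier.

```python
import torch
import torch.nn as nn

# Hypothetical IoT-scale classifier; the paper's actual models are not shown here.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# shrinking model size and reducing inference cost on CPU-only edge devices.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference with the quantized model proceeds as with the float model.
x = torch.randn(1, 64)
with torch.no_grad():
    print(quantized_model(x).shape)  # torch.Size([1, 10])
```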