Machine learning drives the continuous development of signal processing in various fields, including network traffic monitoring, EEG classification, face identification, and many more. However, the massive amount of user data collected to train deep learning models raises privacy concerns and increases the difficulty of manually tuning the network structure. To address these issues, we propose a privacy-preserving neural architecture search (PP-NAS) framework based on secure multi-party computation that protects both users' data and the model's parameters/hyper-parameters. PP-NAS outsources the NAS task to two non-colluding cloud servers to take full advantage of mixed-protocol design. Complementing existing PP machine learning frameworks, we redesign the secure ReLU and Max-pooling garbled circuits for significantly better efficiency ($3 \sim 436$ times speed-up). We also develop a new alternative for approximating the Softmax function over secret shares, which bypasses the limitation of approximating the exponential operation in Softmax while improving accuracy. Extensive analyses and experiments demonstrate PP-NAS's superiority in security, efficiency, and accuracy.
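The abstract does not spell out the underlying protocols, but the two-server setting typically rests on additive secret sharing, where each cloud server holds one random-looking share of every value and linear operations are computed locally on shares. A minimal sketch of this substrate (illustrative background only, not the paper's implementation; the modulus and helper names are assumptions):

```python
import random

# Illustrative field modulus for additive secret sharing (an assumption,
# not the modulus used by PP-NAS).
P = 2**61 - 1

def share(x):
    """Split secret x into two additive shares: x = (s0 + s1) mod P.
    Server 0 receives s0, server 1 receives s1; each share alone
    reveals nothing about x."""
    s0 = random.randrange(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    """Combine both shares to recover the secret."""
    return (s0 + s1) % P

# Linear operations need no communication: each server adds its
# local shares, and the sum of the share-sums equals the secret sum.
x0, x1 = share(7)
y0, y1 = share(35)
z0 = (x0 + y0) % P  # computed by server 0 alone
z1 = (x1 + y1) % P  # computed by server 1 alone
assert reconstruct(z0, z1) == 42
```

Non-linear layers such as ReLU, Max-pooling, and Softmax cannot be computed locally this way, which is why the framework mixes in garbled circuits and a dedicated Softmax approximation for those steps.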