Requiring less data to train accurate models, few-shot learning has shown robustness and generality across many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns: attacks or adversaries may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud, by establishing a novel privacy-preserved embedding space that protects the privacy of the data while maintaining the accuracy of the model. We examine the impact of various image privacy methods, such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix), on few-shot image classification, and propose a method that learns privacy-preserved representations through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
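Of the image privacy methods listed above, DP-Pix is the only one with a formal differential-privacy guarantee: the image is pixelized into b×b blocks, and Laplace noise calibrated to the m-neighborhood sensitivity (m·255/b²) is added to each block mean. The sketch below illustrates that construction for a grayscale image; the function name and parameter defaults are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dp_pixelize(img, b=4, eps=1.0, m=16):
    """Sketch of differentially private pixelization (DP-Pix).

    Replaces each b x b block with its mean pixel value plus Laplace
    noise whose scale is the m-neighborhood sensitivity (m * 255 / b^2)
    divided by the privacy budget eps. Parameter defaults are
    illustrative assumptions.
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    scale = (255.0 * m / (b * b)) / eps  # Laplace scale = sensitivity / eps
    for i in range(0, h, b):
        for j in range(0, w, b):
            block = img[i:i + b, j:j + b]
            # One noisy mean per block: stronger privacy (smaller eps)
            # means larger noise and a coarser, noisier image.
            out[i:i + b, j:j + b] = block.mean() + np.random.laplace(0.0, scale)
    return np.clip(out, 0.0, 255.0)
```

A smaller block size b preserves more spatial detail but raises the per-block sensitivity, so b mediates the privacy-performance trade-off the abstract refers to.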