Differential privacy is often studied in one of two models. In the central model, a single analyzer has the responsibility of performing a privacy-preserving computation on data. In the local model, each data owner ensures their own privacy. Although it removes the need to trust the analyzer, local privacy comes at a price: a locally private protocol is less accurate than its centrally private counterpart on many learning and estimation problems. Protocols in the shuffle model are designed to attain the best of both worlds: recent work has shown that high accuracy is possible under only a mild trust assumption. This survey gives an overview of novel shuffle protocols, along with lower bounds that establish the limits of the new model. We also summarize work that shows the promise of interactivity in the shuffle model.
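To make the three models concrete, the following is a minimal sketch (not from the survey; all names and parameters are illustrative) of a shuffle-model protocol for counting bits. Each user applies a local randomizer (here, classic randomized response), a trusted shuffler permutes the reports to break the link between users and messages, and the analyzer debiases the shuffled sum:

```python
import math
import random

def local_randomizer(bit: bool, epsilon: float) -> bool:
    """Local step: randomized response.
    Report the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else not bit

def shuffle_and_estimate(bits, epsilon: float) -> float:
    """Shuffle-model pipeline: randomize locally, shuffle, then debias.

    The shuffle step models the trusted shuffler: the analyzer sees
    only an unordered multiset of reports, not who sent which one.
    """
    reports = [local_randomizer(b, epsilon) for b in bits]
    random.shuffle(reports)  # the mild trust assumption: an honest shuffler

    # Debias the count: if k users hold a 1, then
    # E[sum(reports)] = p*k + (1-p)*(n-k) = k*(2p-1) + (1-p)*n,
    # so an unbiased estimate of k is:
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    n = len(reports)
    return (sum(reports) - (1 - p) * n) / (2 * p - 1)
```

The same local randomizer alone gives a locally private protocol; the point of the shuffle model is that the anonymity added by the shuffler amplifies the privacy guarantee, letting each user add less noise for the same end-to-end protection and recovering much of the central model's accuracy.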