Many applications seek to produce differentially private statistics on sensitive data. Traditional approaches in the centralised model rely on a trusted aggregator to gather the raw data, compute aggregate statistics, and add appropriate noise. Recent work has tried to relax these trust assumptions and reduce the need for trusted entities. However, such systems often trade reduced trust for increased noise, and still require complete trust in some participants. Moreover, they do not prevent a malicious entity from introducing adversarial noise to skew the result or unmask some inputs. In this paper, we introduce the notion of ``verifiable differential privacy with covert security''. The purpose is to ensure both privacy of the clients' data and assurance that the output is not subject to any form of adversarial manipulation. The result is that everyone is assured that the noise used for differential privacy has been generated correctly, but no one can determine what the noise was. In the event of a malicious entity attempting to pervert the protocol, their actions will be detected with probability negligibly close to one. We show that such verifiable privacy is practical and can be implemented at scale.