Differential privacy (DP) offers a theoretical upper bound on the potential privacy leakage of an algorithm, while empirical auditing establishes a practical lower bound. Auditing techniques exist for DP training algorithms. However, machine learning can also be made private at inference. We propose the first framework for auditing private prediction, where we instantiate adversaries with varying poisoning and query capabilities. This enables us to study the privacy leakage of four private prediction algorithms: PATE [Papernot et al., 2016], CaPC [Choquette-Choo et al., 2020], PromptPATE [Duan et al., 2023], and Private-kNN [Zhu et al., 2020]. To conduct our audit, we introduce novel techniques to empirically evaluate privacy leakage in terms of Rényi DP. Our experiments show that (i) the privacy analysis of private prediction can be improved, (ii) algorithms which are easier to poison lead to much higher privacy leakage, and (iii) the privacy leakage is significantly lower for adversaries without query control than for those with full control.
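As background (not part of the original abstract), the Rényi DP notion in which the audit measures leakage is the standard definition of [Mironov, 2017]; the notation below is chosen here for illustration. A randomized mechanism $M$ satisfies $(\alpha, \varepsilon)$-Rényi DP if, for all adjacent datasets $D, D'$,

$$
D_\alpha\big(M(D) \,\|\, M(D')\big)
  = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim M(D')}\!\left[
      \left( \frac{\Pr[M(D) = x]}{\Pr[M(D') = x]} \right)^{\alpha}
    \right]
  \le \varepsilon ,
$$

i.e., the Rényi divergence of order $\alpha > 1$ between the output distributions on adjacent inputs is bounded by $\varepsilon$. An empirical audit estimates a lower bound on this quantity, which can then be compared against the theoretical guarantee.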