The tension between persuasion and privacy preservation is common in real-world settings. Online platforms should protect the privacy of web users whose data they collect, even as they seek to disclose information about these data in order to sell advertising space. Similarly, hospitals may share patient data to attract research investment while remaining obligated to preserve patients' privacy. To address these issues, we develop a framework for studying Bayesian persuasion under differential privacy constraints, where the sender must design an optimal signaling scheme for persuasion while guaranteeing the privacy of each agent's private information in the database. To understand how privacy constraints affect information disclosure, we explore two perspectives within Bayesian persuasion: one views the mechanism as releasing a posterior about the private data, while the other views it as sending an action recommendation. The posterior-based formulation facilitates the analysis of privacy-utility tradeoffs, quantifying how the tightness of the privacy constraints affects the sender's optimal utility. For any instance in a common family of utility functions and over a wide range of privacy levels, there is a constant utility gap between any two of the three regimes: an $\epsilon$-differential privacy constraint, its relaxation to an $(\epsilon,\delta)$-differential privacy constraint, and no privacy constraint. We further characterize optimal signaling schemes geometrically under the different types of constraints ($\epsilon$-differential privacy, $(\epsilon,\delta)$-differential privacy, and Rényi differential privacy), showing that each reduces to finding a concave hull over a constrained region of posteriors. Meanwhile, taking the action-based view of persuasion, we provide polynomial-time algorithms for computing optimal differentially private signaling schemes whenever a mild homogeneity condition holds.
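As a concrete, minimal sketch of the action-based view (not the paper's actual algorithm), the snippet below formulates the sender's problem as a linear program: the variables are recommendation probabilities $\pi(a \mid \omega)$, obedience constraints make each recommended action incentive-compatible for the receiver, and $\epsilon$-differential privacy is imposed as the likelihood-ratio bounds $\pi(a \mid \omega) \le e^{\epsilon}\,\pi(a \mid \omega')$ for neighboring inputs. The function name `optimal_dp_scheme`, the toy prosecutor-judge instance, and the simplification that every pair of states counts as neighboring databases are all illustrative assumptions, not constructs from the paper.

```python
# Minimal LP sketch of the action-based view: variables are recommendation
# probabilities pi(a | omega); the sender maximizes expected utility subject
# to obedience (incentive-compatibility) constraints and epsilon-DP
# likelihood-ratio bounds. Treating every pair of states as neighboring
# databases is a simplification made for this illustration.
import numpy as np
from scipy.optimize import linprog


def optimal_dp_scheme(prior, u_sender, u_receiver, eps):
    """Solve for an optimal eps-DP signaling scheme by linear programming.

    prior: shape (n_states,); u_sender, u_receiver: shape (n_states, n_actions).
    Decision variables x[w * n_actions + a] = pi(a | omega = w).
    """
    n_w, n_a = u_sender.shape
    n_var = n_w * n_a

    def idx(w, a):
        return w * n_a + a

    # Objective: maximize sum_w prior[w] * pi(a|w) * u_sender[w, a]
    # (linprog minimizes, so negate).
    c = np.zeros(n_var)
    for w in range(n_w):
        for a in range(n_a):
            c[idx(w, a)] = -prior[w] * u_sender[w, a]

    A_ub, b_ub = [], []
    # Obedience: when action a is recommended, no deviation a2 is profitable.
    for a in range(n_a):
        for a2 in range(n_a):
            if a2 == a:
                continue
            row = np.zeros(n_var)
            for w in range(n_w):
                row[idx(w, a)] = -prior[w] * (u_receiver[w, a] - u_receiver[w, a2])
            A_ub.append(row)
            b_ub.append(0.0)
    # eps-DP: pi(a|w) <= exp(eps) * pi(a|w2) for every action a, ordered pair.
    for a in range(n_a):
        for w in range(n_w):
            for w2 in range(n_w):
                if w2 == w:
                    continue
                row = np.zeros(n_var)
                row[idx(w, a)] = 1.0
                row[idx(w2, a)] = -np.exp(eps)
                A_ub.append(row)
                b_ub.append(0.0)

    # Each conditional distribution pi(. | w) must sum to one.
    A_eq = np.zeros((n_w, n_var))
    for w in range(n_w):
        A_eq[w, w * n_a:(w + 1) * n_a] = 1.0

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=np.ones(n_w),
                  bounds=[(0.0, 1.0)] * n_var, method="highs")
    return res.x.reshape(n_w, n_a), -res.fun


# Toy prosecutor-judge instance: states {innocent, guilty}, actions
# {acquit, convict}; the sender always prefers conviction, the receiver
# prefers to match the state.
prior = np.array([0.7, 0.3])
u_s = np.array([[0.0, 1.0], [0.0, 1.0]])
u_r = np.array([[1.0, 0.0], [0.0, 1.0]])
scheme, value = optimal_dp_scheme(prior, u_s, u_r, eps=1.2)
print(np.round(scheme, 3), round(value, 3))
```

On this toy instance the LP yields an expected sender utility of roughly $0.48$ at $\epsilon = 1.2$, compared with $0.6$ under unconstrained persuasion, giving a small numerical illustration of the utility gap that a privacy constraint induces.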