Differentially private (DP) synthetic data generation is a practical method for improving access to data as a means to encourage productive partnerships. One issue inherent to DP is that the "privacy budget" is generally "spent" evenly across features in the data set. This leads to good statistical parity with the real data, but can undervalue the conditional probabilities and marginals that are critical for predictive quality of synthetic data. Further, loss of predictive quality may be non-uniform across the data set, with subsets that correspond to minority groups potentially suffering a higher loss. In this paper, we develop ensemble methods that distribute the privacy budget "wisely" to maximize predictive accuracy of models trained on DP data, and "fairly" to bound potential disparities in accuracy across groups and reduce inequality. Our methods are based on the insights that feature importance can inform how privacy budget is allocated, and, further, that per-group feature importance and fairness-related performance objectives can be incorporated in the allocation. These insights make our methods tunable to social contexts, allowing data owners to produce balanced synthetic data for predictive analysis.
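The core allocation idea can be illustrated with a minimal sketch. This is not the paper's method, only a hypothetical illustration: a total privacy budget is split across features in proportion to their importance scores (with a uniform floor so no feature is starved), and each one-way marginal is then released via the standard Laplace mechanism with its per-feature epsilon. The function names and the floor parameter are assumptions introduced for this example.

```python
import numpy as np

def allocate_budget(importances, total_epsilon, floor=0.05):
    """Split a total privacy budget across features in proportion to
    importance. A fraction `floor` of the budget is spread uniformly
    so that every feature receives a nonzero share."""
    w = np.asarray(importances, dtype=float)
    w = w / w.sum()
    n = len(w)
    shares = floor / n + (1.0 - floor) * w  # shares sum to 1
    return total_epsilon * shares

def noisy_marginal(counts, epsilon, sensitivity=1.0):
    """Release a one-way marginal (histogram of counts) under epsilon-DP
    using the Laplace mechanism: noise scale = sensitivity / epsilon."""
    counts = np.asarray(counts, dtype=float)
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=counts.shape)
    return np.clip(counts + noise, 0.0, None)  # clamp to valid counts

# Hypothetical example: three features with unequal predictive importance.
eps_per_feature = allocate_budget([0.6, 0.3, 0.1], total_epsilon=1.0)
marginal = noisy_marginal([120, 45, 33, 2], eps_per_feature[0])
```

A fairness-aware variant would compute importances per group and blend them (e.g., taking a maximum or a weighted average across groups) before allocation, so that features predictive for minority groups also receive adequate budget.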