Data stewards and analysts can promote transparent and trustworthy science and policy-making by facilitating assessments of the sensitivity of published results to alternative analysis choices. For example, researchers may want to assess whether results change substantially when different subsets of data points (e.g., sets formed by demographic characteristics) are used in the analysis, or when different models (e.g., with or without log transformations) are estimated on the data. Releasing the results of such stability analyses leaks information about the data subjects. When the underlying data are confidential, data stewards and analysts may seek to bound this information leakage. We present methods for stability analyses that can satisfy differential privacy, a definition of data confidentiality that provides such bounds. We use regression modeling as the motivating example. The basic idea is to split the data into disjoint subsets, compute a measure summarizing the difference between the published and alternative analyses on each subset, aggregate these subset estimates, and add noise to the aggregated value to satisfy differential privacy. We illustrate the methods with regressions in which an analyst compares coefficient estimates for different groups in the data, and in which an analyst fits two different models on the data.
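
The split-measure-aggregate-noise idea described above can be sketched in a few lines of code. The following is a minimal, illustrative implementation under simplifying assumptions: the function names (`fit_fn`, `compare_fn`, `dp_stability_measure`) and the clipping-based sensitivity argument are our own placeholders, not the paper's actual interface, and the Laplace mechanism is used only as one standard way to satisfy pure differential privacy.

```python
import numpy as np


def dp_stability_measure(X, y, groups, fit_fn, compare_fn,
                         n_subsets=20, clip=1.0, epsilon=1.0, seed=None):
    """Sketch of a subsample-and-aggregate stability measure.

    Splits the rows into disjoint subsets, computes a clipped measure of the
    difference between the published and alternative analysis on each subset,
    averages the subset values, and adds Laplace noise calibrated to the
    sensitivity of that average. All names and parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    subsets = np.array_split(rng.permutation(n), n_subsets)

    measures = []
    for s in subsets:
        # compare_fn might return, e.g., the difference between a coefficient
        # estimated for two demographic groups, or between two model fits.
        m = compare_fn(fit_fn(X[s], y[s], groups[s]))
        measures.append(np.clip(m, -clip, clip))  # bound each subset's contribution

    avg = float(np.mean(measures))
    # A single record lies in exactly one subset, so the mean of the clipped
    # subset measures changes by at most 2 * clip / n_subsets.
    sensitivity = 2.0 * clip / n_subsets
    return avg + rng.laplace(scale=sensitivity / epsilon)
```

In this sketch, disjointness of the subsets is what keeps the sensitivity small: each confidential record can influence only one of the per-subset comparisons, so the aggregated value changes little when one record changes, and correspondingly little noise is needed for a given privacy budget.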