In the field of statistical disclosure control, the tradeoff between data confidentiality and data utility is assessed by comparing disclosure risk and information loss metrics. Distance-based metrics such as the mean absolute error (MAE), mean squared error (MSE), mean variation (IL1), and its scaled alternative (IL1s) are popular information loss measures for numerical microdata. However, the fact that these measures are unbounded makes it difficult to compare them against disclosure risk measures, which are usually bounded between 0 and 1. In this note, we propose rank-based versions of the MAE and MSE metrics that are bounded in the same range as the disclosure risk metrics. We empirically compare the proposed bounded metrics against the distance-based metrics in a series of experiments where the metrics are evaluated over multiple masked datasets, generated by applying increasing amounts of perturbation (e.g., by adding increasing amounts of noise). Our results show that the proposed bounded metrics produce rankings similar to those of the traditional ones (as measured by Spearman correlation), suggesting that they are viable additions to the toolbox of distance-based information loss metrics currently in use in the SDC literature.
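To make the idea concrete, the following is a minimal sketch of what a rank-based, bounded analogue of MAE and MSE could look like. The function names and the exact normalization (dividing the mean rank displacement by the maximum possible displacement, n - 1) are illustrative assumptions, not the definitions used in the note itself:

```python
import numpy as np

def ranks(x):
    # Ordinal ranks 1..n via argsort (ties broken by position).
    r = np.empty(len(x), dtype=float)
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def rank_mae(original, masked):
    # Mean absolute rank displacement between original and masked
    # values, scaled by the largest possible single displacement
    # (n - 1) so the result lies in [0, 1]. Hypothetical definition.
    n = len(original)
    return np.abs(ranks(original) - ranks(masked)).mean() / (n - 1)

def rank_mse(original, masked):
    # Mean squared rank displacement, scaled by (n - 1)^2 so it is
    # also bounded in [0, 1]. Hypothetical definition.
    n = len(original)
    return ((ranks(original) - ranks(masked)) ** 2).mean() / (n - 1) ** 2

# Example: masking a variable by additive noise. More noise tends to
# reorder records more, increasing the rank-based loss toward 1.
rng = np.random.default_rng(42)
x = rng.normal(size=1000)
masked = x + rng.normal(scale=0.5, size=1000)
loss = rank_mae(x, masked)  # a value in [0, 1]
```

Because such metrics depend only on ranks, they are invariant to monotone rescaling of the data and live on the same [0, 1] scale as common disclosure risk measures, which is the comparability property the abstract highlights.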