Kernel mean embedding is a useful tool for representing and comparing probability measures. Despite its usefulness, kernel mean embedding relies on infinite-dimensional features, which are challenging to handle in the context of differentially private data generation. A recent work proposes to approximate the kernel mean embedding of the data distribution using finite-dimensional random features, which yields an analytically tractable sensitivity. However, the number of required random features is excessively high, often ten thousand to a hundred thousand, which worsens the privacy-accuracy trade-off. To improve the trade-off, we propose to replace random features with Hermite polynomial features. Unlike random features, Hermite polynomial features are ordered: the features at low orders contain more information about the distribution than those at high orders. Hence, a relatively low order of Hermite polynomial features can approximate the mean embedding of the data distribution more accurately than a significantly larger number of random features. As demonstrated on several tabular and image datasets, Hermite polynomial features are better suited for private data generation than random features.
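The ordering property can be illustrated with a small numerical sketch. The code below is not the paper's method; it is a minimal one-dimensional example built on Mehler's formula, which expands a Gaussian-type kernel into probabilists' Hermite polynomials He_n with weights rho^n/n! that decay rapidly in the order n. Truncating this expansion gives finite-dimensional Hermite features whose inner products quickly approach the exact kernel mean embedding; the feature map and the parameter rho here are illustrative choices, not quantities from the paper.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def hermite_features(x, order, rho):
    """Truncated Hermite feature map phi_n(x) = sqrt(rho^n / n!) * He_n(x), n = 0..order."""
    x = np.asarray(x, dtype=float)
    feats = np.empty((x.size, order + 1))
    for n in range(order + 1):
        coef = np.zeros(n + 1)
        coef[n] = 1.0  # select He_n
        feats[:, n] = np.sqrt(rho**n / factorial(n)) * hermeval(x, coef)
    return feats

def mehler_kernel(x, y, rho):
    """Closed form of sum_n (rho^n / n!) He_n(x) He_n(y)  (Mehler's formula)."""
    return (1 - rho**2) ** -0.5 * np.exp(
        (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2)))

rho = 0.5
data = np.array([0.3, -0.7, 1.1])          # toy "dataset"
q = 0.2                                    # query point
exact = mehler_kernel(data, q, rho).mean() # exact kernel mean embedding at q

# Error of the truncated feature-based embedding shrinks fast with the order K,
# because the weight rho^n / n! suppresses high-order features.
errs = []
for K in (2, 5, 10, 15):
    approx = (hermite_features(data, K, rho) @ hermite_features([q], K, rho)[0]).mean()
    errs.append(abs(approx - exact))
    print(f"order {K:2d}: |error| = {errs[-1]:.2e}")
```

In this sketch a handful of ordered Hermite features already matches the exact embedding to several digits, whereas an unordered random-feature expansion of the same kernel would need far more features to reach a comparable error.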