We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees. For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees, as well as an efficient DP learning algorithm with margin guarantees. We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses with shift-invariant kernels, such as Gaussian kernels, and point out how our results can be extended to other kernels using oblivious sketching techniques. We further give a pure DP learning algorithm for a family of feed-forward neural networks for which we prove margin guarantees that are independent of the input dimension. Additionally, we describe a general label DP learning algorithm, which benefits from relative deviation margin bounds and is applicable to a broad family of hypothesis sets, including that of neural networks. Finally, we show how our DP learning algorithms can be augmented in a general way with model selection, to choose the best confidence margin parameter.
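The kernel-hypothesis result rests on replacing a shift-invariant kernel with a finite-dimensional random feature map, after which a DP algorithm for linear hypotheses can be applied in the feature space. As a minimal, illustrative sketch of that reduction, and not the paper's exact construction, the following shows the standard random Fourier features approximation of a Gaussian kernel; the names gaussian_rff, D, and sigma are hypothetical:

    import numpy as np

    def gaussian_rff(X, D, sigma, rng):
        """Map inputs X (n x d) to D random Fourier features whose inner
        products approximate k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
        d = X.shape[1]
        # Sample frequencies from the kernel's spectral distribution,
        # N(0, sigma^{-2} I_d), and uniform phase shifts in [0, 2*pi).
        W = rng.normal(scale=1.0 / sigma, size=(d, D))
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    # Sanity check: feature inner products approximate the exact kernel.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    Phi = gaussian_rff(X, D=4096, sigma=1.0, rng=rng)
    K_approx = Phi @ Phi.T
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K_exact = np.exp(-sq_dists / 2.0)
    print(np.max(np.abs(K_approx - K_exact)))  # small; shrinks as D grows

Because the feature dimension D is chosen independently of the input dimension, margin-based guarantees proved for the resulting linear problem carry no explicit dependence on the original dimension.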