In machine learning, many problems can be formulated as minimax problems, including reinforcement learning and generative adversarial networks, to name just a few. As a result, the minimax problem has attracted considerable attention from researchers in recent decades. However, relatively little work has studied the privacy of the general minimax paradigm. In this paper, we focus on the privacy of the general minimax setting, combining differential privacy with the minimax optimization paradigm. Moreover, via algorithmic stability theory, we theoretically analyze the high-probability generalization performance of the differentially private minimax algorithm under the strongly-convex-strongly-concave condition. To the best of our knowledge, this is the first analysis of the generalization performance of the general minimax paradigm that takes differential privacy into account.
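To make the setting concrete, the following is a minimal sketch of one standard way to combine differential privacy with minimax optimization: gradient descent-ascent with per-example gradient clipping and Gaussian noise. The function names, hyperparameters, and noise calibration below are illustrative assumptions for exposition, not the specific algorithm analyzed in this paper.

```python
# Sketch of differentially private gradient descent-ascent (DP-GDA) for an
# empirical minimax objective min_x max_y (1/n) sum_i f(x, y; z_i).
# Hypothetical names/parameters; a sketch, not the paper's method.
import numpy as np

def _clip(g, c):
    """Rescale gradient g to have L2 norm at most c (bounds sensitivity)."""
    norm = np.linalg.norm(g)
    return g * min(1.0, c / max(norm, 1e-12))

def dp_gda(grad_x, grad_y, x, y, data, steps=100, lr=0.1,
           clip=1.0, noise_multiplier=1.0, seed=0):
    """Noisy gradient descent on x and ascent on y.

    grad_x(x, y, z), grad_y(x, y, z): per-example gradients on example z.
    Per-example gradients are clipped to norm `clip`, averaged, and perturbed
    with Gaussian noise scaled by `noise_multiplier` (Gaussian mechanism).
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    sigma = noise_multiplier * clip / n  # noise scale ~ per-step sensitivity
    for _ in range(steps):
        # Average clipped per-example gradients for both players.
        gx = np.mean([_clip(grad_x(x, y, z), clip) for z in data], axis=0)
        gy = np.mean([_clip(grad_y(x, y, z), clip) for z in data], axis=0)
        # Descent step on the min player, ascent step on the max player.
        x = x - lr * (gx + rng.normal(0.0, sigma, size=x.shape))
        y = y + lr * (gy + rng.normal(0.0, sigma, size=y.shape))
    return x, y
```

Under the strongly-convex-strongly-concave condition assumed above, iterations of this kind contract toward the saddle point, which is what makes stability-based generalization arguments tractable; the overall privacy guarantee would be obtained by composing the per-step Gaussian mechanisms across the `steps` iterations.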