Link prediction (LP) algorithms propose to each node a ranked list of currently non-neighboring nodes as the most likely candidates for future linkage. Owing to increasing concerns about privacy, users (nodes) may prefer to keep some of their connections protected or private. Motivated by this observation, our goal is to design a differentially private LP algorithm, which trades off between the privacy of the protected node-pairs and link prediction accuracy. More specifically, we first propose a form of differential privacy on graphs, which models the privacy loss of only those node-pairs that are marked as protected. Next, we develop DPLP, a learning-to-rank algorithm, which applies a monotone transform to base scores from a non-private LP system and then adds noise. DPLP is trained with a privacy-induced ranking loss, which optimizes ranking utility for a given maximum allowed level of privacy leakage of the protected node-pairs. Under a recently introduced latent node embedding model, we present a formal trade-off between privacy and LP utility. Extensive experiments with several real-life graphs and LP heuristics show that DPLP can trade off between privacy and predictive performance more effectively than several alternatives.
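To make the score-perturbation idea concrete, the following is a minimal illustrative sketch, not the trained DPLP transform: it computes a non-private base score (common neighbors, a standard LP heuristic), applies a hypothetical monotone transform (here, `log1p`), and adds Laplace noise to the scores of pairs marked as protected before ranking. All function names and the choice of transform are assumptions for illustration.

```python
import math
import random


def common_neighbors(adj, u, v):
    # Base (non-private) LP score: number of shared neighbors.
    return len(adj[u] & adj[v])


def noisy_ranking(adj, u, protected, epsilon, seed=0):
    """Rank candidate partners for node u, perturbing the transformed
    scores of protected pairs.  Illustrative sketch only: DPLP learns
    its monotone transform, whereas here it is fixed to log1p."""
    rng = random.Random(seed)
    scores = {}
    for v in adj:
        if v == u or v in adj[u]:
            continue  # rank only current non-neighbors
        s = math.log1p(common_neighbors(adj, u, v))  # monotone transform
        if (u, v) in protected or (v, u) in protected:
            # Difference of two Exp(epsilon) draws is Laplace(1/epsilon):
            # larger epsilon => less noise => weaker privacy protection.
            s += rng.expovariate(epsilon) - rng.expovariate(epsilon)
        scores[v] = s
    # Return candidates sorted by (possibly noisy) score, best first.
    return sorted(scores, key=scores.get, reverse=True)


# Toy graph as adjacency sets; pair (0, 3) is marked protected.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
ranked = noisy_ranking(adj, 0, protected={(0, 3)}, epsilon=1.0)
```

In this toy graph, node 3 is the only non-neighbor of node 0, so the ranked list contains just that one candidate; with more candidates, the noise on protected pairs would shuffle their positions relative to the unperturbed order.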