We study the problem of secure stochastic convex optimization. A learner aims to learn the optimal point of a convex function by sequentially querying a (stochastic) gradient oracle. Meanwhile, an adversary aims to free-ride and infer the learner's learning outcome by observing her queries. The adversary observes only the query points, not the feedback from the oracle. The learner's goal is to optimize accuracy, i.e., to obtain an accurate estimate of the optimal point, while securing her privacy, i.e., making it difficult for the adversary to infer the optimal point. We formally quantify this tradeoff between the learner's accuracy and privacy, and characterize lower and upper bounds on the learner's query complexity as a function of the desired levels of accuracy and privacy. For the lower bounds, we provide a general template based on information-theoretic analysis and then tailor it to several families of problems, including stochastic convex optimization and (noisy) binary search. We also present a generic secure learning protocol that achieves the matching upper bound up to logarithmic factors.
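As a reading aid, one way to formalize the interaction is sketched below; the symbols $f$, $x_t$, $g_t$, $T$, $\hat{x}$, $\tilde{x}$, $\epsilon$, $\delta$ are introduced here for illustration and are not necessarily the paper's notation.
\[
\mathbb{E}\big[\,g_t \mid x_t\,\big] = \nabla f(x_t), \quad t = 1,\dots,T; \qquad
f(\hat{x}) - f(x^\ast) \le \epsilon \ \ (\text{accuracy}); \qquad
\big\|\tilde{x}(x_1,\dots,x_T) - x^\ast\big\| \ge \delta \ \ (\text{privacy}),
\]
where the learner outputs $\hat{x}$ using both the query points and the gradient feedback, while any adversary estimate $\tilde{x}$ is a function of the query points alone and should fail to localize the minimizer $x^\ast$ (e.g., the privacy condition holds with constant probability). Under this sketch, the query complexity is the smallest $T$ for which a given accuracy level $\epsilon$ and privacy level $\delta$ are simultaneously achievable.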