Secure multi-party machine learning allows several parties to build a model on their pooled data to increase utility while not explicitly sharing data with each other. We show that such multi-party computation can cause leakage of global dataset properties between the parties even when parties obtain only black-box access to the final model. In particular, a ``curious'' party can infer the distribution of sensitive attributes in other parties' data with high accuracy. This raises concerns regarding the confidentiality of properties pertaining to the whole dataset as opposed to individual data records. We show that our attack can leak population-level properties in datasets of different types, including tabular, text, and graph data. To understand and measure the source of leakage, we consider several models of correlation between a sensitive attribute and the rest of the data. Using multiple machine learning models, we show that leakage occurs even if the sensitive attribute is not included in the training data and has a low correlation with other attributes or the target variable.
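To make the attack setting concrete, below is a minimal sketch of a property inference attack under black-box access, using a shadow-model / meta-classifier approach. Everything here is an illustrative assumption rather than the paper's exact construction: the synthetic data, the weak correlation between the sensitive bit and the features, the candidate 30% vs. 70% attribute fractions, and all function names are hypothetical.

```python
# Sketch: infer the fraction of a sensitive attribute in a model's training
# data from black-box query outputs alone. Assumed setup, not the paper's
# exact attack: shadow models + a meta-classifier over their signatures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(frac_sensitive, n=2000, d=8):
    """Synthetic data; the sensitive bit is NOT a feature and only weakly
    shifts one feature and the label, mirroring the low-correlation setting."""
    s = rng.random(n) < frac_sensitive
    X = rng.normal(size=(n, d))
    X[:, 0] += 0.5 * s                            # weak correlation with data
    y = (X[:, 1] + 0.2 * s + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Fixed attack queries: the adversary probes every model on the same points.
X_query = rng.normal(size=(64, 8))

def black_box_signature(frac):
    """Train a victim-like model, record its outputs on the attack queries."""
    X, y = make_dataset(frac)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.predict_proba(X_query)[:, 1]     # black-box confidences only

# Shadow phase: signatures of models trained on two candidate distributions.
fracs = [0.3] * 30 + [0.7] * 30                   # 30% vs. 70% sensitive
sigs = np.stack([black_box_signature(f) for f in fracs])
labels = (np.array(fracs) > 0.5).astype(int)

# Meta-classifier: maps a model's black-box signature to the hidden property.
meta = LogisticRegression(max_iter=1000).fit(sigs, labels)

# Attack: query an unseen "victim" model and infer its dataset's property.
victim_sig = black_box_signature(0.7)
print("inferred high-fraction dataset:", bool(meta.predict([victim_sig])[0]))
```

The design point the sketch illustrates is that the adversary never sees the other parties' records: it only compares the victim model's query responses against responses of shadow models trained under different candidate distributions of the sensitive attribute.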