Property inference attacks against machine learning (ML) models aim to infer properties of the training data that are unrelated to the primary task of the model, and have so far been formulated as binary decision problems, i.e., deciding whether or not the training data have a certain property. However, in industrial and healthcare applications, the proportion of labels in the training data is often also considered sensitive information. In this paper we introduce a new type of property inference attack that, unlike the binary decision problems in the literature, aims at inferring the class label distribution of the training data from the parameters of ML classifier models. We propose a method based on \emph{shadow training} and a \emph{meta-classifier} trained on the parameters of the shadow classifiers, augmented with the accuracy of those classifiers on auxiliary data. We evaluate the proposed approach for ML classifiers with fully connected neural network architectures, and find that the proposed \emph{meta-classifier} attack provides a maximum relative improvement of $52\%$ over the state of the art.
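The attack pipeline described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the synthetic data generator, feature extractor, and the choice of `MLPClassifier`/`Ridge` from scikit-learn are all assumptions made for the example. Shadow classifiers are trained on data with known label distributions, their flattened parameters (augmented with accuracy on auxiliary data) become meta-features, and a meta-model regresses the label distribution of a victim classifier.

```python
# Hedged sketch of a shadow-training / meta-classifier attack for inferring
# the class label distribution of a victim model's training data.
# All names and model choices are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def make_data(pos_fraction, n=400, d=8):
    """Synthetic binary task whose label distribution we control."""
    y = (rng.random(n) < pos_fraction).astype(int)
    X = rng.normal(size=(n, d)) + y[:, None]  # class-dependent mean shift
    return X, y

# Auxiliary data used to measure each shadow model's accuracy.
X_aux, y_aux = make_data(0.5, n=500)

def features(clf):
    """Flattened network parameters augmented with auxiliary accuracy."""
    params = np.concatenate([w.ravel() for w in clf.coefs_]
                            + [b.ravel() for b in clf.intercepts_])
    return np.append(params, clf.score(X_aux, y_aux))

# 1) Shadow training: classifiers trained under known label distributions.
fractions = rng.uniform(0.1, 0.9, size=40)
rows = []
for p in fractions:
    X, y = make_data(p)
    shadow = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                           random_state=0).fit(X, y)
    rows.append(features(shadow))

# 2) Meta-classifier: maps parameter features to the label distribution.
meta = Ridge().fit(np.array(rows), fractions)

# 3) Attack a "victim" model whose training distribution is unknown
#    to the adversary (here, 70% positive labels).
X_v, y_v = make_data(0.7)
victim = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                       random_state=0).fit(X_v, y_v)
print(round(float(meta.predict([features(victim)])[0]), 2))
```

Note that the meta-model here is a regressor over a continuous label fraction, which is what distinguishes this attack from the binary property inference formulation.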