Auditing Data Provenance (ADP), i.e., auditing whether a given piece of data has been used to train a machine learning model, is an important problem in data provenance. The feasibility of the task has been demonstrated by existing auditing techniques, e.g., shadow auditing methods, under certain conditions such as the availability of label information and knowledge of the target model's training protocol. Unfortunately, both conditions are often unmet in real applications. In this paper, we introduce Data Provenance via Differential Auditing (DPDA), a practical framework for auditing data provenance that takes a different approach based on statistically significant differentials: after a carefully designed transformation, perturbed inputs drawn from the target model's training set produce much more drastic changes in the output than those drawn from its non-training set. This framework allows auditors to distinguish training data from non-training data without training any shadow models, and hence without the labeled output data such models require. Furthermore, we propose two effective implementations of the auditing function, an additive one and a multiplicative one. We report evaluations on real-world data sets demonstrating the effectiveness of our proposed auditing technique.
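To make the differential-auditing idea concrete, the sketch below illustrates one plausible reading of it: perturb each queried input several times, measure how much the model's output shifts, and flag inputs whose average shift exceeds a threshold as likely training members. This is a minimal illustration, not the paper's actual auditing functions; the use of Gaussian noise, the L2-norm shift statistic, and the names model_fn, noise_scale, n_trials, and threshold are all assumptions made for the example. The "additive" and "multiplicative" modes are likewise only one hypothetical interpretation of the two proposed auditing function implementations.

```python
import numpy as np

def differential_audit(model_fn, samples, noise_scale=0.05, n_trials=16,
                       threshold=0.1, mode="additive"):
    """Flag samples whose outputs change sharply under small perturbations.

    A minimal sketch of differential auditing: per the framework's premise,
    training-set inputs are expected to exhibit much more drastic output
    changes under a carefully designed perturbation than non-training inputs.

    model_fn: maps a batch of inputs (2-D array) to output vectors,
              e.g., softmax scores; treated as a black box.
    mode:     "additive" perturbs x + noise; "multiplicative" perturbs
              x * (1 + noise). Both are illustrative assumptions.
    """
    base = model_fn(samples)                       # unperturbed outputs
    shifts = np.zeros(len(samples))
    for _ in range(n_trials):
        noise = np.random.normal(0.0, noise_scale, size=samples.shape)
        if mode == "additive":
            perturbed = samples + noise
        else:  # multiplicative
            perturbed = samples * (1.0 + noise)
        out = model_fn(perturbed)
        # Accumulate the L2 shift of each sample's output vector.
        shifts += np.linalg.norm(out - base, axis=1)
    shifts /= n_trials
    # True = predicted member of the target model's training set.
    return shifts > threshold
```

Note that the auditor here never trains a shadow model and never needs labeled outputs: membership is decided purely from the differential response of the target model to perturbed queries.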