The abundance of training data available on the Internet has been a key factor in the success of deep learning models. However, this wealth of publicly available data also raises concerns about the unauthorized exploitation of datasets for commercial purposes, which is forbidden by dataset licenses. In this paper, we propose a backdoor-based watermarking approach that serves as a general framework for safeguarding publicly available data. By inserting a small number of watermarking samples into a dataset, our approach causes the learning model to implicitly learn a secret function set by the defender. This hidden function can then be used as a watermark to track down third-party models that use the dataset illegally. Unfortunately, existing backdoor insertion methods often entail adding arbitrary, mislabeled data to the training set, leading to a significant drop in performance and easy detection by anomaly detection algorithms. To overcome this challenge, we introduce a clean-label backdoor watermarking framework that uses imperceptible perturbations in place of mislabeled samples. As a result, the watermarking samples remain consistent with their original labels, making them difficult to detect. Our experiments on text, image, and audio datasets demonstrate that the proposed framework safeguards datasets effectively with minimal impact on original-task performance. We also show that adding just 1% of watermarking samples suffices to inject a traceable watermarking function, and that the watermarking samples are stealthy and look benign under visual inspection.
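To make the clean-label idea concrete, the sketch below illustrates one plausible instantiation for image data: a small fraction of correctly labeled target-class samples receive a low-amplitude additive trigger, so labels are never flipped, and ownership is later verified by checking whether a suspect model pulls triggered probes toward the target class. This is a minimal illustration, not the paper's implementation; names such as `epsilon`, `trigger`, and `watermark_fraction` are assumed for exposition.

```python
# Minimal sketch (assumed details, not the authors' code) of clean-label
# dataset watermarking: perturb a few correctly labeled samples with an
# imperceptible secret trigger, keep their original labels, and later test
# whether a suspect model has learned the trigger-to-class association.
import numpy as np

rng = np.random.default_rng(0)

def make_trigger(shape, epsilon=4.0 / 255.0, seed=42):
    """Secret low-amplitude additive pattern known only to the defender."""
    return np.random.default_rng(seed).uniform(-epsilon, epsilon, size=shape)

def watermark_dataset(images, labels, target_class, watermark_fraction=0.01):
    """Perturb a small fraction of target-class images; labels stay unchanged."""
    images = images.copy()
    idx = np.flatnonzero(labels == target_class)
    n_mark = max(1, int(watermark_fraction * len(images)))
    chosen = rng.choice(idx, size=min(n_mark, len(idx)), replace=False)
    trigger = make_trigger(images.shape[1:])
    images[chosen] = np.clip(images[chosen] + trigger, 0.0, 1.0)
    return images, labels, trigger  # clean-label: no label is flipped

def verify_watermark(predict_fn, probe_images, trigger, target_class):
    """Ownership check: fraction of triggered probes classified as target_class."""
    triggered = np.clip(probe_images + trigger, 0.0, 1.0)
    preds = predict_fn(triggered)
    return float(np.mean(preds == target_class))
```

A model trained only on clean data should classify triggered probes at roughly chance level, while a model trained on the watermarked dataset should show a markedly higher rate, providing the traceable signal described above.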