The vast amount of training data available on the Internet has been a key factor in the success of deep learning models. However, this abundance of publicly available data also raises concerns about the unauthorized exploitation of datasets for commercial purposes, which is forbidden by dataset licenses. In this paper, we propose a backdoor-based watermarking approach that serves as a general framework for safeguarding publicly available data. By inserting a small number of watermarking samples into a dataset, our approach enables the learning model to implicitly learn a secret function set by the defender. This hidden function can then be used as a watermark to track down third-party models that use the dataset illegally. Unfortunately, existing backdoor insertion methods often entail adding arbitrary and mislabeled data to the training set, leading to a significant drop in performance and easy detection by anomaly detection algorithms. To overcome this challenge, we introduce a clean-label backdoor watermarking framework that uses imperceptible perturbations in place of mislabeled samples. As a result, the watermarking samples remain consistent with their original labels, making them difficult to detect. Our experiments on text, image, and audio datasets demonstrate that the proposed framework safeguards datasets effectively with minimal impact on original-task performance. We also show that adding just 1% of watermarking samples is enough to inject a traceable watermarking function, and that the watermarking samples are stealthy and look benign under visual inspection.
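To make the clean-label idea concrete, the following is a minimal NumPy sketch, not the paper's implementation. The function names (`watermark_dataset`, `verify_watermark`), the bounded `trigger` pattern, the L-infinity budget `eps`, and the Keras-style `model.predict` interface are all illustrative assumptions. The key property it demonstrates is that the trigger is stamped only onto samples that already belong to the target class, so no label is ever changed.

```python
import numpy as np

def watermark_dataset(images, labels, target_class, trigger, rate=0.01, eps=8 / 255, seed=0):
    """Clean-label watermarking sketch: embed a bounded trigger pattern into a
    small fraction of samples that ALREADY belong to target_class, leaving
    their labels untouched (no mislabeled data is added)."""
    rng = np.random.default_rng(seed)
    wm = images.copy()
    in_class = np.where(labels == target_class)[0]
    n_wm = min(len(in_class), max(1, int(rate * len(images))))  # e.g. ~1% of the dataset
    chosen = rng.choice(in_class, size=n_wm, replace=False)
    # Clip the perturbation so it stays imperceptible (within the eps budget),
    # then keep pixel values in the valid [0, 1] range.
    bounded_trigger = np.clip(trigger, -eps, eps)
    wm[chosen] = np.clip(wm[chosen] + bounded_trigger, 0.0, 1.0)
    return wm, labels, chosen  # labels are returned unchanged by construction

def verify_watermark(model, probe_images, target_class, trigger, eps=8 / 255):
    """Ownership check: a model trained on the watermarked dataset should map
    trigger-stamped inputs to target_class far more often than chance."""
    stamped = np.clip(probe_images + np.clip(trigger, -eps, eps), 0.0, 1.0)
    preds = model.predict(stamped).argmax(axis=1)  # assumes a Keras-style API
    return (preds == target_class).mean()
```

Under these assumptions, a dataset owner would release the output of `watermark_dataset` and later run `verify_watermark` on a suspect model: a trigger success rate well above the base rate of `target_class` is evidence that the model was trained on the protected data.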