Training machine learning (ML) models is expensive in terms of computational power, labeled data, and human expertise. ML models therefore constitute intellectual property (IP) and business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using responses returned by the prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against IP theft via model extraction, since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter IP theft via model extraction. Unlike prior watermarking schemes, DAWN does not impose changes on the training process; instead, it operates at the prediction API of the protected model, dynamically changing the responses for a small subset of queries (e.g., <0.5%) from API clients. This subset constitutes a watermark that will be embedded if a client uses its queries to train a surrogate model. We show that DAWN is resilient against two state-of-the-art model extraction attacks, effectively watermarking all extracted surrogate models and allowing model owners to reliably demonstrate ownership (with confidence $> 1 - 2^{-64}$), while incurring negligible loss of prediction accuracy (0.03-0.5%).