Deep neural network (DNN) models are valuable intellectual property of model owners, constituting a competitive advantage. Therefore, it is crucial to develop techniques to protect against model theft. Model ownership resolution (MOR) is a class of techniques that can deter model theft. A MOR scheme enables an accuser to assert an ownership claim for a suspect model by presenting evidence, such as a watermark or fingerprint, to show that the suspect model was stolen or derived from a source model owned by the accuser. Most existing MOR schemes prioritize robustness against malicious suspects, ensuring that the accuser will win if the suspect model is indeed stolen. In this paper, we show that common MOR schemes in the literature are vulnerable to a different, equally important but insufficiently explored, robustness concern: a malicious accuser. We show how malicious accusers can successfully make false claims against independent suspect models that were not stolen. Our core idea is that a malicious accuser can deviate (without detection) from the specified MOR process by finding (transferable) adversarial examples that successfully serve as evidence against independent suspect models. To this end, we first generalize the procedures of common MOR schemes and show that, under this generalization, defending against false claims is as challenging as preventing (transferable) adversarial examples. Via systematic empirical evaluation, we demonstrate that our false claim attacks always succeed against all prominent MOR schemes with realistic configurations, including against a real-world model: Amazon's Rekognition API.
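To make the core idea concrete, the following is a minimal sketch (not the paper's exact procedure) of how a malicious accuser might fabricate "evidence": craft targeted, transferable adversarial examples on a surrogate model it controls and present the (input, label) pairs as if they were watermark triggers, hoping an independent suspect model assigns the same labels. The model architectures, PGD hyperparameters, and helper names below are illustrative assumptions.

```python
# Sketch of false-claim evidence generation via transferable adversarial examples.
# Assumes PyTorch/torchvision; all models, inputs, and hyperparameters are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

def craft_false_evidence(surrogate, x, target_label, eps=8/255, alpha=2/255, steps=40):
    """Targeted PGD on the accuser's surrogate model. The resulting
    (perturbed input, target label) pairs act as fabricated 'watermark'
    evidence if they transfer to an independent suspect model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), target_label)
        grad, = torch.autograd.grad(loss, x_adv)
        # Targeted attack: step against the gradient to raise target-label confidence.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around the clean input and valid pixel range.
        x_adv = x.clone().detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    surrogate = models.resnet18(weights=None).eval()   # accuser's own surrogate (illustrative)
    suspect = models.resnet34(weights=None).eval()     # independent suspect model (illustrative)
    x = torch.rand(4, 3, 224, 224)                     # placeholder inputs
    target = torch.randint(0, 1000, (4,))              # chosen "watermark" labels
    evidence = craft_false_evidence(surrogate, x, target)
    # A false claim "succeeds" on the samples the suspect also assigns the target label.
    transferred = (suspect(evidence).argmax(1) == target).float().mean()
    print(f"fraction of fabricated evidence accepted by suspect: {transferred:.2f}")
```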