The training and creation of a deep learning model is usually costly, so the model can be regarded as intellectual property (IP) of the model creator. However, malicious users who obtain a high-performance model may illegally copy, redistribute, or abuse the model, or use it to provide prediction services without permission. To deal with such security threats, a number of deep neural network (DNN) IP protection methods have been proposed in recent years. This paper provides a review of existing DNN IP protection works, together with an outlook. First, we propose the first taxonomy for DNN IP protection methods in terms of five attributes: scenario, capacity, type, mechanism, and attack resistance. Second, we survey existing DNN IP protection works along these five attributes, focusing in particular on the challenges these methods face, whether they can provide proactive protection, and their resistance to different levels of attack. Third, we analyze potential attacks on DNN IP protection methods. Fourth, we propose a systematic evaluation method for DNN IP protection methods. Lastly, challenges and future work are presented.