Capitalising on deep learning models to offer Natural Language Processing (NLP) solutions as part of Machine Learning as a Service (MLaaS) has generated handsome revenues. At the same time, it is well known that creating these lucrative deep models is non-trivial. Therefore, protecting the intellectual property rights (IPR) of these inventions from being abused, stolen, and plagiarised is vital. This paper proposes a practical approach to IPR protection for recurrent neural networks (RNN) without all the bells and whistles of existing IPR solutions. In particular, we introduce the Gatekeeper concept, which exploits the recurrent nature of the RNN architecture to embed keys. We also design the model training scheme such that the protected RNN model retains its original performance iff a genuine key is presented. Extensive experiments show that our protection scheme is robust and effective against ambiguity and removal attacks, in both white-box and black-box settings, on different RNN variants. Code is available at https://github.com/zhiqin1998/RecurrentIPR
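To make the Gatekeeper idea concrete, the following is a minimal, hypothetical sketch (not the authors' exact formulation): a key vector acts as a multiplicative gate on the recurrent state at every time step, so the hidden-state trajectory, and hence the model's output quality, depends on the key being genuine. All names and the gating form here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a key vector gates the recurrent term of a
# simple RNN cell. With the genuine key, the cell computes its intended
# dynamics; with a counterfeit key, the state trajectory diverges and
# performance degrades.

rng = np.random.default_rng(0)
hidden = 8

W_h = rng.normal(size=(hidden, hidden)) * 0.1  # recurrent weights
W_x = rng.normal(size=(hidden, hidden)) * 0.1  # input weights
genuine_key = np.array([1., 0., 1., 1., 0., 1., 0., 1.])  # toy binary key

def rnn_step(h, x, key):
    # The key elementwise-gates the recurrent contribution, so every
    # subsequent state depends on the key that was supplied.
    return np.tanh(key * (W_h @ h) + W_x @ x)

def run(key, steps=5):
    h = np.zeros(hidden)
    for _ in range(steps):
        x = np.ones(hidden)  # dummy constant input for illustration
        h = rnn_step(h, x, key)
    return h

h_genuine = run(genuine_key)
h_fake = run(1.0 - genuine_key)  # counterfeit key flips every gate
```

Here `h_genuine` and `h_fake` differ, illustrating how key-dependent recurrence can tie model behaviour to possession of the genuine key; the paper's actual scheme additionally trains the model so that only the genuine key preserves task performance.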