Training a good deep learning model requires substantial data and computing resources, which makes the resulting neural model valuable intellectual property. To prevent a neural network from being undesirably exploited, non-transferable learning has been proposed to reduce the model's generalization ability on specific target domains. However, existing approaches require labeled data for the target domain, which can be difficult to obtain. Furthermore, they provide no mechanism for later recovering the model's ability on the target domain. In this paper, we propose a novel unsupervised non-transferable learning method for the text classification task that does not require annotated target-domain data. We further introduce a secret-key component into our approach for recovering access to the target domain, designing both an explicit and an implicit method for doing so. Extensive experiments demonstrate the effectiveness of our approach.