In order to address real-world problems, deep learning models are jointly trained on many classes. However, some of these classes may later become restricted due to privacy or ethical concerns, and the knowledge of the restricted classes has to be removed from models that were trained on them. The available training data may also be limited for the same reasons, making full re-training infeasible. We propose a novel approach that removes restricted-class knowledge without affecting the model's predictive power on the remaining classes. Our approach identifies the model parameters that are highly relevant to the restricted classes and removes the restricted-class knowledge from them using the limited available training data. Our approach is significantly faster than, and performs on par with, a model re-trained from scratch on the complete data of the remaining classes.
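To make the two stages concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: score parameters by their relevance to the restricted classes, wipe the most relevant ones, then briefly fine-tune on the limited remaining-class data. The relevance measure used here (accumulated gradient magnitude on restricted-class examples) and all names (`restricted_loader`, `remaining_loader`, `top_frac`) are illustrative assumptions, not the paper's exact criterion or API.

```python
import torch
import torch.nn.functional as F

def restricted_class_relevance(model, restricted_loader, device="cpu"):
    """Score each parameter by accumulated gradient magnitude on
    restricted-class batches (a hypothetical proxy for relevance;
    the paper's exact measure may differ)."""
    relevance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in restricted_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                relevance[n] += p.grad.detach().abs()
    return relevance

def remove_restricted_knowledge(model, relevance, remaining_loader,
                                top_frac=0.05, lr=1e-3, steps=100,
                                device="cpu"):
    """Reinitialize the most restricted-class-relevant parameters, then
    briefly fine-tune on the limited remaining-class data so accuracy on
    the remaining classes is preserved."""
    for n, p in model.named_parameters():
        flat = relevance[n].flatten()
        k = max(1, int(top_frac * flat.numel()))
        thresh = flat.topk(k).values.min()
        mask = relevance[n] >= thresh
        with torch.no_grad():
            # Wipe the highly relevant weights with small random values.
            p[mask] = torch.randn_like(p[mask]) * 0.01
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    it = iter(remaining_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(remaining_loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model
```

Because only a small fraction of parameters is reinitialized and the fine-tuning runs for a handful of steps on the limited data, this kind of procedure is far cheaper than re-training the full model, which is the speed advantage the abstract claims.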