We develop an algorithm to improve the performance of a pre-trained model under concept shift without retraining the model from scratch, when only unannotated samples of the initial concepts are accessible. We formulate this setting as a domain adaptation problem in which the source domain data is inaccessible during model adaptation. The core idea is to consolidate the intermediate internal distribution, learned to represent the source domain data, so that it is preserved after the model is adapted. We provide a theoretical analysis and conduct extensive experiments to demonstrate that the proposed method is effective.
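To make the core idea concrete, below is a minimal sketch of one plausible instantiation. It assumes the internal distribution is modeled as a Gaussian mixture fit on source embeddings, and that adaptation minimizes a sliced Wasserstein distance between target embeddings and samples drawn from that mixture; the network shapes, the `sliced_wasserstein` helper, and the stand-in data are all illustrative assumptions, not details taken from the abstract.

```python
# Hypothetical sketch: source-free adaptation via a consolidated internal
# distribution. Assumptions (not stated in the abstract): the internal
# distribution is a Gaussian mixture over source embeddings, and alignment
# uses a sliced Wasserstein loss.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

def sliced_wasserstein(x, y, n_proj=50):
    """Monte-Carlo sliced Wasserstein-2 distance between two equal-size batches."""
    d = x.size(1)
    proj = torch.randn(d, n_proj, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)           # random unit directions
    xp = (x @ proj).sort(dim=0).values                     # sorted 1-D projections
    yp = (y @ proj).sort(dim=0).values
    return ((xp - yp) ** 2).mean()

# --- before deployment: consolidate the internal distribution --------------
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
classifier = nn.Linear(64, 10)                             # stays frozen during adaptation

source_x = torch.randn(2048, 784)                          # stand-in for source data
with torch.no_grad():
    z_src = encoder(source_x)
gmm = GaussianMixture(n_components=10).fit(z_src.numpy()) # internal distribution

# --- after concept shift: adapt using only unannotated target samples ------
target_x = torch.randn(512, 784)                           # stand-in for shifted data
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for _ in range(100):
    z_tgt = encoder(target_x)
    z_ref, _ = gmm.sample(len(z_tgt))                      # draw from consolidated dist.
    z_ref = torch.as_tensor(z_ref, dtype=torch.float32)
    loss = sliced_wasserstein(z_tgt, z_ref)                # align target embeddings
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, sampling from the fitted mixture stands in for the inaccessible source data, which is what allows adaptation to proceed source-free while the frozen classifier remains valid on the consolidated embedding distribution.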