Recommender models play a vital role in various industrial scenarios, yet they often suffer from catastrophic forgetting caused by fast-shifting data distributions. To alleviate this problem, a common approach is to reuse knowledge from historical data. However, preserving the vast and fast-accumulating raw data is difficult and incurs dramatic storage overhead. Memorizing old data in a parametric knowledge base has therefore been proposed, compressing the vast amount of raw data into model parameters. Despite this flexibility, it remains challenging to improve the memorization and generalization capabilities of the parametric knowledge base and to serve the flexible information need of each instance. In this paper, we propose FINED to Feed INstance-wise information need with Essential and Disentangled parametric knowledge from past data for recommendation enhancement. Concretely, we train a knowledge extractor that extracts knowledge patterns of arbitrary order from past data and a knowledge encoder that memorizes these patterns; in the subsequent knowledge-reusing phase, they serve as the retrieval key generator and the memory network, respectively. The whole process is regularized by two proposed constraints, which improve the capabilities of the parametric knowledge base without increasing its size. The essential principle compresses each input into representative vectors that capture task-relevant information and filter out noise. The disentanglement principle reduces the redundancy of stored information and pushes the knowledge base to capture disentangled, invariant patterns. Together, these two principles promote rational compression of information, yielding robust and generalized knowledge representations. Extensive experiments on two datasets verify the effectiveness of the proposed method.
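Since the abstract only names the components, the following is a minimal PyTorch sketch of how such a parametric knowledge base could be instantiated. The module shapes, the L2 bottleneck standing in for the essential constraint, and the slot-orthogonality penalty standing in for the disentanglement constraint are all illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a parametric knowledge base with an "essential"
# (compression) and a "disentanglement" (decorrelation) regularizer.
# All module choices and loss forms are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParametricKnowledgeBase(nn.Module):
    def __init__(self, input_dim: int, key_dim: int = 32, num_slots: int = 64):
        super().__init__()
        # Knowledge extractor: compresses an instance into a retrieval key.
        self.extractor = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, key_dim)
        )
        # Knowledge encoder: learnable memory slots storing past patterns.
        self.memory = nn.Parameter(torch.randn(num_slots, key_dim) * 0.02)

    def forward(self, x: torch.Tensor):
        key = self.extractor(x)                           # (B, key_dim)
        attn = F.softmax(key @ self.memory.t(), dim=-1)   # (B, num_slots)
        retrieved = attn @ self.memory                    # (B, key_dim)
        return key, retrieved

    def essential_loss(self, key: torch.Tensor) -> torch.Tensor:
        # Essential principle (one possible instantiation): an L2 bottleneck
        # discouraging the key from carrying more information than needed.
        return key.pow(2).mean()

    def disentangle_loss(self) -> torch.Tensor:
        # Disentanglement principle (one possible instantiation): push memory
        # slots toward orthogonality so stored patterns are non-redundant.
        m = F.normalize(self.memory, dim=-1)
        gram = m @ m.t()
        off_diag = gram - torch.eye(gram.size(0))
        return off_diag.pow(2).mean()


# Usage: retrieve knowledge for a batch and combine the regularizers
# (weights are arbitrary here) with the downstream recommendation loss.
kb = ParametricKnowledgeBase(input_dim=16)
x = torch.randn(8, 16)
key, retrieved = kb(x)
reg = kb.essential_loss(key) + 0.1 * kb.disentangle_loss()
reg.backward()
```

In a full pipeline, the retrieved vector would be fused with the recommender's instance features, and both regularizers would be added to the task loss with tuned weights; the key point of the sketch is that the constraints act on the parametric memory itself rather than enlarging it.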