Large pretrained language models (PLMs) have become the de facto news encoders in modern news recommender systems, owing to their strong ability to comprehend textual content. These huge Transformer-based architectures, when finetuned on recommendation tasks, can greatly improve news recommendation performance. However, the PLM-based pretrain-finetune framework incurs high computational cost and energy consumption, primarily due to the extensive redundant re-encoding of news during each training epoch. In this paper, we propose the ``Only Encode Once'' framework for news recommendation (OLEO), which decouples news representation learning from downstream recommendation task learning. The decoupled design makes content-based news recommenders as green and efficient as ID-based ones, greatly reducing computational cost and training resources. Extensive experiments show that our OLEO framework can reduce carbon emissions by up to 13 times compared with the state-of-the-art pretrain-finetune framework, while maintaining a competitive or even superior performance level. The source code is released for reproducibility.
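To make the decoupled design concrete, the sketch below illustrates the general ``encode once'' idea under our own assumptions: news texts are embedded a single time by a frozen text encoder, the vectors are cached, and the downstream recommender is then trained on those fixed vectors much like an ID-based model. All module, function, and parameter names here are illustrative (HuggingFace-style encoder/tokenizer calls, a simple attention-pooled user model), not the released OLEO code.

```python
# Minimal sketch of decoupled, encode-once news recommendation.
# Assumptions: a HuggingFace-style encoder/tokenizer pair; names are hypothetical.
import torch
import torch.nn as nn

def precompute_news_embeddings(encoder, tokenizer, news_texts):
    """Run the frozen text encoder exactly once over the news corpus and cache the result."""
    encoder.eval()
    vecs = []
    with torch.no_grad():
        for text in news_texts:
            tokens = tokenizer(text, return_tensors="pt", truncation=True)
            # Use the [CLS] vector as the news representation (one common choice).
            vecs.append(encoder(**tokens).last_hidden_state[:, 0])
    return torch.cat(vecs)  # shape: [num_news, dim]; this cache replaces per-epoch re-encoding

class LightweightRecommender(nn.Module):
    """Downstream model that only sees cached news vectors, never raw text."""
    def __init__(self, dim=768):
        super().__init__()
        self.user_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.CosineSimilarity(dim=-1)

    def forward(self, clicked_news_vecs, candidate_vec):
        # clicked_news_vecs: [batch, history_len, dim], looked up from the cache by news ID.
        user_vecs, _ = self.user_attn(clicked_news_vecs, clicked_news_vecs, clicked_news_vecs)
        user_vec = user_vecs.mean(dim=1)            # pool the click history into a user vector
        return self.score(user_vec, candidate_vec)  # click score for each candidate news item
```

Because the expensive encoder runs only in the precompute step, each training epoch touches just the lightweight recommender, which is where the efficiency gain of the decoupled design comes from.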