News recommendation systems play a critical role in alleviating information overload by delivering personalized content. A key challenge lies in jointly modeling multi-view representations of news articles and capturing the dynamic, dual-scale nature of user interests, encompassing both short- and long-term preferences. Prior methods often rely on single-view features or insufficiently model user behavior across time. In this work, we introduce Co-NAML-LSTUR, a hybrid news recommendation framework that integrates NAML for attentive multi-view news encoding and LSTUR for hierarchical user modeling, designed for training under limited data resources. Our approach leverages BERT-based embeddings to enhance semantic representation. We evaluate Co-NAML-LSTUR on two widely used benchmarks, MIND-small and MIND-large. Results show that our model significantly outperforms strong baselines, achieving improvements over NRMS of 1.55% in AUC and 1.15% in MRR, and over NAML of 2.45% in AUC and 1.71% in MRR. These findings highlight the effectiveness of our efficiency-focused hybrid model, which combines multi-view news modeling with dual-scale user representations for practical, resource-limited settings, rather than claiming absolute state-of-the-art (SOTA) performance. The implementation of our model is publicly available at https://github.com/MinhNguyenDS/Co-NAML-LSTUR