Incorporating item-side information, such as category and brand, into sequential recommendation is a well-established and effective approach for improving performance. However, despite significant advancements, current models are generally limited by three key challenges: they often overlook the fine-grained temporal dynamics inherent in timestamps, are vulnerable to noise in user interaction sequences, and rely on computationally expensive fusion architectures. To address these challenges systematically, we propose the Time-Aware Adaptive Side Information Fusion (TASIF) framework. TASIF integrates three synergistic components: (1) a simple, plug-and-play time span partitioning mechanism that captures global temporal patterns; (2) an adaptive frequency filter that leverages a learnable gate to denoise feature sequences adaptively, providing higher-quality inputs for subsequent fusion modules; and (3) an efficient adaptive side information fusion layer built on a "guide-not-mix" architecture, in which attributes guide the attention mechanism without being mixed into the content-representing item embeddings, enabling deep interaction while preserving computational efficiency. Extensive experiments on four public datasets demonstrate that TASIF significantly outperforms state-of-the-art baselines while maintaining excellent training efficiency. Our source code is available at https://github.com/jluo00/TASIF.
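The adaptive frequency filter described in component (2) can be illustrated with a minimal numpy sketch of its forward pass: the feature sequence is moved to the frequency domain, each frequency bin is scaled by a sigmoid-activated learnable gate, and the result is transformed back. The function name, gate parameterization, and shapes below are hypothetical, chosen for illustration; in the actual framework the gate would be a trained parameter inside a deep-learning model.

```python
import numpy as np

def adaptive_frequency_filter(x, gate_logits):
    """Denoise a feature sequence via a per-frequency learnable gate.

    x:           (seq_len, dim) feature sequence.
    gate_logits: (seq_len // 2 + 1, dim) gate parameters, one per rFFT
                 bin (hypothetical shape; in practice a trained tensor).
    """
    spec = np.fft.rfft(x, axis=0)                # to frequency domain
    gate = 1.0 / (1.0 + np.exp(-gate_logits))    # sigmoid -> (0, 1) per bin
    filtered = spec * gate                       # attenuate noisy bins
    return np.fft.irfft(filtered, n=x.shape[0], axis=0)  # back to time domain

# Usage: fully open gates (logits >> 0) pass the sequence through unchanged.
x = np.random.randn(50, 8)
open_gates = np.full((50 // 2 + 1, 8), 10.0)     # sigmoid(10) ~ 1
y = adaptive_frequency_filter(x, open_gates)
```

Because the gate is applied per bin, gradient descent can learn to suppress high-frequency components that behave like interaction noise while keeping the bins that carry the user's preference signal.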