Transformers have emerged as powerful methods for sequential recommendation. However, existing architectures often overlook the complex dependencies between user preferences and the temporal context. In this short paper, we introduce MOJITO, an improved Transformer sequential recommender system that addresses this limitation. MOJITO leverages Gaussian mixtures of attention-based temporal context and item embedding representations for sequential modeling. This approach makes it possible to accurately predict which items should be recommended next to users, depending on both their past actions and the temporal context of these actions. We demonstrate the relevance of our approach by empirically outperforming existing Transformers for sequential recommendation on several real-world datasets.
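To make the core idea more concrete, the sketch below illustrates one possible reading of the mechanism: item embeddings are fused with a temporal-context embedding drawn from a Gaussian mixture before self-attention scores the sequence. This is a minimal illustrative sketch, not the authors' implementation; the embedding dimension, the sine/cosine hour-of-day encoding, the mixture parameters, and the single-head unmasked attention are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8           # embedding dimension (illustrative choice)
n_items = 50    # catalogue size (illustrative choice)
K = 3           # number of Gaussian mixture components (illustrative choice)

# Item embedding table and one user's interaction sequence (synthetic data).
item_emb = rng.normal(size=(n_items, d))
seq = rng.integers(0, n_items, size=10)

# Temporal context per interaction: hour-of-day encoded with sine/cosine
# features and projected to d dimensions (one simple encoding choice).
hours = rng.integers(0, 24, size=len(seq))
angle = 2 * np.pi * hours / 24
time_feat = np.stack([np.sin(angle), np.cos(angle)], axis=1)
W_time = rng.normal(size=(2, d))
time_emb = time_feat @ W_time

# Gaussian mixture over the temporal context: each component contributes a
# mean shift and a diagonal std; mixture weights sum to one.
weights = rng.dirichlet(np.ones(K))
means = rng.normal(scale=0.1, size=(K, d))
stds = np.full((K, d), 0.05)

def gaussian_mixture_context(base, rng):
    """Sample one perturbed context embedding per position from the mixture."""
    comp = rng.choice(K, size=len(base), p=weights)
    return base + rng.normal(means[comp], stds[comp])

ctx = gaussian_mixture_context(time_emb, rng)

# Fuse item and temporal-context embeddings, then apply scaled dot-product
# self-attention over the sequence (single head, no causal mask, for brevity).
x = item_emb[seq] + ctx
scores = x @ x.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ x

# Score candidate next items against the last position's output.
logits = item_emb @ out[-1]
next_item = int(np.argmax(logits))
```

In a trained model the embeddings, projection, and mixture parameters would be learned end to end; here they are randomly initialized only to show how the pieces fit together.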