Successful sequential recommendation systems rely on accurately capturing users' short-term and long-term interests. Although Transformer-based models have achieved state-of-the-art performance on the sequential recommendation task, they generally require memory and time quadratic in the sequence length, making it difficult to extract users' long-term interests. On the other hand, Multi-Layer Perceptron (MLP)-based models, known for their linear memory and time complexity, have recently shown results competitive with Transformers on various tasks. Given the availability of massive amounts of user behavior history, the linear memory and time complexity of MLP-based models make them a promising alternative to explore for sequential recommendation. To this end, we adopted MLP-based models for sequential recommendation but consistently observed that, despite their computational benefits, they underperform Transformer-based methods. Our experiments show that introducing explicit high-order interactions into the MLP layers mitigates this performance gap. In response, we propose the Multi-Order Interaction (MOI) layer, which can express an arbitrary order of interactions within its inputs while maintaining the memory and time complexity of an MLP layer. By replacing the MLP layer with the MOI layer, our model achieves performance comparable to Transformer-based models while retaining the computational benefits of MLP-based models.
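The idea of explicit high-order interactions at MLP-level cost can be illustrated with a minimal sketch. This is a hypothetical construction, not the paper's exact MOI formulation: an order-k output is formed as the element-wise product of k linear projections of the input, so k-th order feature interactions are expressed with only k matrix multiplications, keeping memory and time linear in the sequence length.

```python
import numpy as np

def multi_order_interaction(x, weights, biases):
    """Hypothetical sketch of a multi-order interaction layer.

    Instead of a single linear map, the layer takes the element-wise
    product of several linear projections of the input. A product of
    k projections expresses explicit k-th order interactions, while
    the cost stays linear in the sequence length (one matmul per
    projection, no pairwise attention over positions).
    """
    out = np.ones((x.shape[0], weights[0].shape[1]))
    for W, b in zip(weights, biases):
        # Each factor is an ordinary affine map; multiplying the
        # factors raises the interaction order by one.
        out = out * (x @ W + b)
    return out

rng = np.random.default_rng(0)
seq_len, d = 8, 16
x = rng.standard_normal((seq_len, d))
order = 3  # illustrative choice; the layer accepts any order
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(order)]
biases = [np.zeros(d) for _ in range(order)]
y = multi_order_interaction(x, weights, biases)
print(y.shape)
```

With `order = 1` this reduces to a plain MLP layer, which is consistent with the abstract's claim that the MOI layer is a drop-in replacement for the MLP layer with the same asymptotic complexity.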