In this work, we examine the advantages of using multiple types of behaviour in recommendation systems. Intuitively, a user typically performs some implicit actions (e.g., clicks) before making an explicit decision (e.g., a purchase). Previous studies have shown that implicit and explicit feedback play different roles in producing useful recommendations. However, these studies either exploit implicit and explicit behaviour separately or ignore the semantics of the sequential interactions between users and items. In addition, we start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests. In this paper, we propose several deep learning architectures. The first, Implicit to Explicit (ITE), exploits users' interests through the sequence of their actions. We then present two versions of ITE with a Bidirectional Encoder Representations from Transformers (BERT)-based architecture, called BERT-ITE and BERT-ITE-Si, which combine users' long- and short-term preferences, without and with side information respectively, to enhance the user representation. Experimental results on two large-scale datasets show that our models outperform previous state-of-the-art ones and support our view of the effectiveness of exploiting the implicit-to-explicit order as well as combining long- and short-term preferences.
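The following is a minimal, hypothetical sketch of the core idea that a user's current preference blends long-term and short-term interests derived from a sequence of implicit actions. It is not the paper's BERT-ITE implementation: the mean-pooling choice, the window size `k`, the mixing weight `alpha`, and the function names are illustrative assumptions.

```python
# Illustrative sketch only: blending long- and short-term interests from implicit
# actions (clicks) to score items. Not the authors' BERT-ITE architecture.
import numpy as np

rng = np.random.default_rng(0)

num_items, dim = 1000, 32
item_embeddings = rng.normal(size=(num_items, dim))    # assumed pretrained item vectors

def user_preference(clicked_items, k=5, alpha=0.5):
    """Blend long-term (whole history) and short-term (last k clicks) interests."""
    history = item_embeddings[clicked_items]            # (T, dim) sequence of action embeddings
    long_term = history.mean(axis=0)                    # average over the full interaction history
    short_term = history[-k:].mean(axis=0)              # average over the most recent k actions
    return alpha * long_term + (1.0 - alpha) * short_term

def recommend(clicked_items, top_n=10):
    """Score all items by dot product with the blended user vector."""
    user_vec = user_preference(clicked_items)
    scores = item_embeddings @ user_vec
    scores[clicked_items] = -np.inf                     # do not re-recommend already-clicked items
    return np.argsort(-scores)[:top_n]

print(recommend(clicked_items=[3, 17, 42, 99, 256, 731]))
```

In the actual models, the hand-crafted pooling above would be replaced by a learned BERT-style encoder over the action sequence, optionally enriched with side information (as in BERT-ITE-Si).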