Most NN-RSs focus on accuracy by building representations from direct user-item interactions (e.g., the user-item rating matrix) while ignoring the underlying relatedness between users and between items (e.g., users who give the same ratings to the same items should be embedded into similar representations), which is an ideological disadvantage. On the other hand, ME models directly employ the inner product as the default similarity metric, which cannot project users and items into a proper latent space; this is a methodological disadvantage. In this paper, we propose a supervised collaborative representation learning model, Magnetic Metric Learning (MML), to map users and items into a unified latent vector space and thereby enhance representation learning for NN-RSs. First, MML utilizes dual triplets to model not only the observed relationships between users and items but also the underlying relationships among users and among items, overcoming the ideological disadvantage. Specifically, a modified metric-based dual loss function is proposed in MML to gather similar entities and disperse dissimilar ones. With MML, we can easily compare all relationships (user to user, item to item, user to item) under the weighted metric, which overcomes the methodological disadvantage. We conduct extensive experiments on four real-world datasets with large item spaces. The results demonstrate that MML learns a proper unified latent space for representations from the user-item matrix with high accuracy and effectiveness, and leads to a performance gain over state-of-the-art RS models by an average of 17%.
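To make the notion of dual triplets concrete, the following is a minimal sketch of a metric-based dual loss: a plain Euclidean triplet margin term on an observed user-item triplet, combined with an analogous term on an underlying user-user triplet. The function names, the margin, the mixing weight alpha, and the choice of Euclidean distance are illustrative assumptions for exposition only, not the exact MML formulation described in this paper.

    import numpy as np

    def triplet_margin_loss(anchor, positive, negative, margin=1.0):
        # Standard triplet margin loss in Euclidean space: pull the positive
        # toward the anchor and push the negative at least `margin` farther away.
        d_pos = np.linalg.norm(anchor - positive)
        d_neg = np.linalg.norm(anchor - negative)
        return max(0.0, d_pos - d_neg + margin)

    def dual_triplet_loss(u, i_pos, i_neg, u_sim, u_dis, alpha=0.5, margin=1.0):
        # Illustrative "dual triplet" objective (hypothetical form):
        #   - an observed user-item triplet (u, i_pos, i_neg),
        #   - an underlying user-user triplet (u, u_sim, u_dis),
        # combined with an assumed weight `alpha`. An analogous item-item
        # term could be added in the same way.
        ui_term = triplet_margin_loss(u, i_pos, i_neg, margin)
        uu_term = triplet_margin_loss(u, u_sim, u_dis, margin)
        return ui_term + alpha * uu_term

    # Toy usage with random 8-dimensional embeddings (illustrative only).
    rng = np.random.default_rng(0)
    u, i_pos, i_neg, u_sim, u_dis = rng.normal(size=(5, 8))
    print(dual_triplet_loss(u, i_pos, i_neg, u_sim, u_dis))

Because all three relationship types (user-user, item-item, user-item) are measured by distances in one latent space, they remain directly comparable, which is the property the weighted metric in MML relies on.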