In one-class recommendation systems, the goal is to learn a model from a small set of observed user-item interactions and then identify the positively-related user-item pairs among a large number of pairs with unknown interactions. Most previous loss functions rely on dissimilar pairs of users and items, selected from the pairs with unknown interactions, to obtain better prediction performance. This strategy introduces several challenges: it increases training time, and it can hurt performance when pairs that are in fact similar but have unknown interactions are picked as dissimilar pairs. In this paper, our goal is to train models using only the set of similar pairs. We identify three trivial solutions that models converge to when trained only on similar pairs: the collapsed, partially collapsed, and shrinking solutions. We propose two terms that can be added to the objective functions in the literature to avoid these solutions. The first is a hinge pairwise distance loss that avoids the shrinking and collapsed solutions by keeping the average pairwise distance of all the representations greater than a margin. The second is an orthogonality term that minimizes the correlation between the dimensions of the representations and avoids the partially collapsed solution. We conduct experiments on a variety of tasks on public and real-world datasets. The results show that our approach using only similar pairs outperforms state-of-the-art methods that use similar pairs together with a large number of dissimilar pairs.
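To make the two regularizers concrete, the following is a minimal PyTorch sketch of a hinge pairwise distance loss and an orthogonality term matching the descriptions above. The function names, the choice of Euclidean distance, and the per-dimension normalization are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def hinge_pairwise_distance_loss(z: torch.Tensor, margin: float) -> torch.Tensor:
    """Hinge loss on the average pairwise distance of a batch of
    representations z (n, d): zero once the average distance of all
    distinct pairs exceeds `margin`, discouraging the collapsed and
    shrinking solutions."""
    d = torch.cdist(z, z)                                    # (n, n) Euclidean distances
    n = z.size(0)
    mask = ~torch.eye(n, dtype=torch.bool, device=z.device)  # exclude self-distances
    avg_dist = d[mask].mean()                                 # mean over distinct pairs
    return torch.relu(margin - avg_dist)                      # hinge at the margin

def orthogonality_loss(z: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between representation dimensions by summing
    the squared off-diagonal entries of the correlation matrix,
    discouraging the partially collapsed solution."""
    z = z - z.mean(dim=0, keepdim=True)                       # center each dimension
    z = z / (z.std(dim=0, keepdim=True) + 1e-8)               # unit variance per dimension
    corr = (z.T @ z) / (z.size(0) - 1)                        # (d, d) correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))            # zero out the diagonal
    return (off_diag ** 2).sum()

if __name__ == "__main__":
    z = torch.randn(128, 64)  # a batch of 128 hypothetical 64-d representations
    print(hinge_pairwise_distance_loss(z, margin=1.0).item())
    print(orthogonality_loss(z).item())
```

In training, these terms would be added with weighting coefficients to an existing similarity-based objective, e.g. `loss = base_loss + lam1 * hinge_pairwise_distance_loss(z, margin) + lam2 * orthogonality_loss(z)`, where the weights and margin are hypothetical hyperparameters.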