Much like other learning-based models, recommender systems can be affected by biases in the training data. While typical evaluation metrics (e.g., hit rate) do not account for them, some categories of end users are heavily affected by these biases. In this work, we propose using multiple triplet loss terms to extract meaningful and robust representations of users and items. We empirically evaluate the soundness of such representations through several "bias-aware" evaluation metrics, as well as in terms of stability to changes in the training set and agreement between the variance of the predictions and that of each user.
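As a point of reference for the triplet loss terms mentioned above, the following is a minimal NumPy sketch of a standard triplet margin loss, not the authors' exact formulation: the anchor could be a user embedding, the positive an item the user interacted with, and the negative a non-interacted item. The function name, the `margin` value, and the use of squared Euclidean distance are all illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss over batches of embeddings.

    anchor, positive, negative: arrays of shape (batch, dim).
    Returns the mean hinge loss over the batch.
    """
    # Squared Euclidean distances from the anchor to each candidate
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    # Hinge: the positive must be closer than the negative by at least `margin`
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Multiple such terms (e.g., one anchored on users and one anchored on items) can simply be summed into a single training objective.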