Metric learning has attracted extensive interest for its ability to provide personalized recommendations based on the importance of observed user-item interactions. Current metric learning methods aim to push negative items away from the corresponding users and positive items by an absolute geometric distance margin. However, items may come from imbalanced categories with different intra-class variations, so an absolute distance margin can be ill-suited to estimating the difference between user preferences over imbalanced items. To this end, we propose a new method, named Discrete Scale-Invariant Metric Learning (DSIML), which imposes binary constraints on users and items and maps them into binary codes in a shared Hamming subspace to speed up online recommendation. Specifically, we first propose a scale-invariant margin based on the angles at the negative item points in the shared Hamming subspace, and then derive a scale-invariant triple hinge loss from this margin. To capture more information about preference differences, we integrate a pairwise ranking loss with the scale-invariant loss in the proposed model. Because the resulting mixed-integer optimization problem, formulated with \textit{log-sum-exp} functions, is difficult to optimize directly, we instead optimize its variational quadratic upper bound and learn hash codes with an alternating optimization strategy. Experiments on benchmark datasets clearly show that the proposed method outperforms competitive metric learning and hashing-based baselines for recommender systems.
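To make the scale-invariant margin concrete, consider a minimal sketch based only on the description above (the notation $\mathbf{p}_u$, $\mathbf{q}_i$, $\mathbf{q}_j$ and the margin $m$ are illustrative, not necessarily the paper's exact formulation): for a triple consisting of a user $u$, a positive item $i$, and a negative item $j$, the angle at the negative item point and the corresponding triple hinge loss can be written as
\[
\theta_j \;=\; \arccos\frac{(\mathbf{p}_u - \mathbf{q}_j)^{\top}(\mathbf{q}_i - \mathbf{q}_j)}{\lVert \mathbf{p}_u - \mathbf{q}_j \rVert \,\lVert \mathbf{q}_i - \mathbf{q}_j \rVert},
\qquad
\ell(u, i, j) \;=\; \max\bigl(0,\; \theta_j - m\bigr).
\]
Since $\theta_j$ depends only on the directions of the difference vectors, uniformly rescaling all embeddings leaves the loss unchanged; minimizing the hinge drives the angle subtended at the negative item below the margin $m$, i.e., it pushes $j$ far away relative to the user-positive distance.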
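The abstract does not name the specific variational quadratic upper bound; a classical candidate for the \textit{log-sum-exp} function is B\"{o}hning's bound, sketched here under that assumption. For $\operatorname{lse}(\mathbf{x}) = \log\sum_{k=1}^{K} e^{x_k}$ and any expansion point $\boldsymbol{\psi} \in \mathbb{R}^{K}$,
\[
\operatorname{lse}(\mathbf{x}) \;\le\; \operatorname{lse}(\boldsymbol{\psi})
+ (\mathbf{x} - \boldsymbol{\psi})^{\top}\operatorname{softmax}(\boldsymbol{\psi})
+ \tfrac{1}{2}(\mathbf{x} - \boldsymbol{\psi})^{\top}\mathbf{A}\,(\mathbf{x} - \boldsymbol{\psi}),
\qquad
\mathbf{A} = \tfrac{1}{2}\Bigl(\mathbf{I}_K - \tfrac{1}{K}\mathbf{1}\mathbf{1}^{\top}\Bigr).
\]
Because the curvature matrix $\mathbf{A}$ is constant, re-expanding the bound at the current iterate yields a quadratic surrogate that can be minimized without line searches, which is compatible with the alternating optimization over hash codes described above.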
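A small numerical sketch of the angular hinge loss (on the real-valued relaxation, before binarization; the function name and margin value are hypothetical) illustrates the scale invariance:
\begin{verbatim}
import numpy as np

def angular_hinge(p_u, q_i, q_j, margin=0.5):
    """Triple hinge loss on the angle at the negative item point q_j.

    A hypothetical sketch of the loss described in the abstract,
    applied to real-valued embeddings.
    """
    a = p_u - q_j            # negative item -> user
    b = q_i - q_j            # negative item -> positive item
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return max(0.0, theta - margin)

# Rescaling all embeddings leaves the loss unchanged (scale invariance).
rng = np.random.default_rng(0)
p_u, q_i, q_j = rng.normal(size=(3, 16))
print(angular_hinge(p_u, q_i, q_j))
print(angular_hinge(5.0 * p_u, 5.0 * q_i, 5.0 * q_j))  # identical value
\end{verbatim}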