Random Utility Models (RUMs), which subsume the Plackett-Luce (PL) model as a special case, are among the most popular models for preference learning. In this paper, we consider RUMs with features and their mixtures, where each alternative has a vector of features that may differ across agents. Such models significantly generalize the standard PL model and RUMs, but are not as well investigated in the literature. We extend mixtures of RUMs with features to models that generate incomplete preferences and characterize their identifiability. For PL, we prove that when PL with features is identifiable, its MLE is consistent and its objective function is strictly concave under mild assumptions; we also derive a bound on the root-mean-square error (RMSE), which naturally leads to a sample complexity bound. We further characterize the identifiability of more general RUMs with features and propose a generalized RBCML algorithm to learn them. Our experiments on synthetic data demonstrate the effectiveness of the MLE for PL with features, with a tradeoff between statistical efficiency and computational efficiency. Our experiments on real-world data show the predictive power of PL with features and its mixtures.
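To make the MLE for PL with features concrete, the following is a minimal sketch (the synthetic setup, variable names, and sample sizes are our own illustration, not taken from the paper): each agent sees alternatives with agent-specific feature vectors, utilities are linear in a shared parameter `theta`, rankings are drawn from the PL model, and `theta` is recovered by minimizing the negative log-likelihood, which is convex when the model is identifiable.

```python
import numpy as np
from scipy.optimize import minimize

def sample_ranking(theta, X, rng):
    """Sample a full PL ranking where alternative a has utility X[a] @ theta."""
    remaining = list(range(X.shape[0]))
    order = []
    while remaining:
        u = X[remaining] @ theta
        p = np.exp(u - u.max())          # softmax choice probabilities
        p /= p.sum()
        pick = rng.choice(len(remaining), p=p)
        order.append(remaining.pop(pick))
    return order

def neg_log_likelihood(theta, data):
    """Negative PL log-likelihood over (features, ranking) pairs.
    Each position k contributes log softmax over the not-yet-ranked tail."""
    nll = 0.0
    for X, sigma in data:
        u = X[sigma] @ theta             # utilities in ranked order
        for k in range(len(sigma) - 1):
            tail = u[k:]
            m = tail.max()               # log-sum-exp for numerical stability
            nll -= tail[0] - (m + np.log(np.exp(tail - m).sum()))
    return nll

# Hypothetical synthetic experiment: agent-specific Gaussian features.
rng = np.random.default_rng(0)
d, m, n = 3, 5, 1000                     # feature dim, alternatives, agents
theta_true = np.array([1.0, -0.5, 0.8])
data = []
for _ in range(n):
    X = rng.normal(size=(m, d))          # features differ across agents
    data.append((X, sample_ranking(theta_true, X, rng)))

res = minimize(neg_log_likelihood, np.zeros(d), args=(data,))
theta_hat = res.x
```

With randomly drawn features the model is generically identifiable, and the estimate `theta_hat` concentrates around `theta_true` as the number of agents grows, consistent with the RMSE and sample complexity bounds discussed above.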