Bias is inherent in recommender systems: it is entangled with users' preferences and poses a great challenge to unbiased learning. For debiasing tasks, the doubly robust (DR) method and its variants show superior performance due to the double robustness property, that is, DR is unbiased when either the imputed errors or the learned propensities are accurate. However, our theoretical analysis reveals that DR usually has a large variance. Moreover, DR can suffer unexpectedly large bias and poor generalization when the imputed errors and learned propensities are inaccurate, as often occurs in practice. In this paper, we propose a principled approach that effectively reduces bias and variance simultaneously for existing DR approaches when the error imputation model is misspecified. In addition, we propose a novel semi-parametric collaborative learning approach that decomposes the imputed errors into parametric and nonparametric parts and updates them collaboratively, resulting in more accurate predictions. Both theoretical analysis and experiments demonstrate the superiority of the proposed methods over existing debiasing methods.
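To make the double robustness property concrete, the following is a minimal sketch of the standard DR estimator of the average prediction error over all user-item pairs. The function name and variable names are illustrative, not from the paper; `o` is the binary observation indicator, `e_true` the true error (meaningful only where `o == 1`), `e_imputed` the output of the error imputation model, and `p_hat` the learned propensity.

```python
import numpy as np

def dr_estimator(o, e_true, e_imputed, p_hat):
    """Doubly robust estimate of the mean prediction error (illustrative sketch).

    The imputed error serves as a baseline for every pair; on observed
    pairs, an inverse-propensity-weighted correction replaces the
    imputed error with the true one in expectation.
    """
    correction = o * (e_true - e_imputed) / p_hat
    return np.mean(e_imputed + correction)
```

If the imputed errors are exactly accurate (`e_imputed == e_true`), the correction term vanishes and the estimator returns the true mean error regardless of the propensities; symmetrically, if the propensities match the true observation probabilities, the correction is unbiased regardless of the imputation model. This is the double robustness the abstract refers to, and the inverse-propensity division in the correction term is also where the large variance arises when `p_hat` is small.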