Prediction models can exhibit sensitivity with respect to training data: small changes in the training data can produce models that assign conflicting predictions to individual data points at test time. In this work, we study this sensitivity in recommender systems, where users' recommendations are drastically altered by minor perturbations in other, unrelated users' interactions. We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which quantifies how the rank lists generated by a given recommender system at test time change as a result of a perturbation in the training data. We develop a method, CASPER, which exploits a cascading effect to identify a minimal, systematic perturbation that induces high instability in a recommender system. Experiments on four datasets show that recommender models are overly sensitive to minor perturbations introduced randomly or via CASPER: even perturbing one random interaction of one user drastically changes the recommendation lists of all users. Importantly, under CASPER perturbations, the models generate more unstable recommendations for low-accuracy users (i.e., those who receive low-quality recommendations) than for high-accuracy ones.
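To make the RLS measure concrete, the following is a minimal sketch, assuming RLS is instantiated as the average similarity between each user's top-k rank list before and after a training-data perturbation (here using Jaccard overlap; all function and variable names are illustrative, not the paper's reference implementation):

```python
def jaccard_at_k(list_a, list_b, k=10):
    """Jaccard similarity between the top-k items of two rank lists."""
    top_a, top_b = set(list_a[:k]), set(list_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

def rank_list_sensitivity(original_lists, perturbed_lists, k=10):
    """Average top-k overlap across users; lower values indicate higher
    sensitivity (rank lists changed more under the perturbation)."""
    sims = [
        jaccard_at_k(original_lists[u], perturbed_lists[u], k)
        for u in original_lists
    ]
    return sum(sims) / len(sims)

# Example: two users whose top-3 lists change after perturbing one interaction.
original = {"u1": ["i1", "i2", "i3"], "u2": ["i4", "i5", "i6"]}
perturbed = {"u1": ["i2", "i7", "i1"], "u2": ["i4", "i8", "i9"]}
print(rank_list_sensitivity(original, perturbed, k=3))  # 0.35 < 1.0, unstable
```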