Recently, one critical issue has loomed large in the field of recommender systems: the lack of effective benchmarks for rigorous evaluation, which consequently leads to unreproducible evaluation and unfair comparison. We therefore conduct studies from the perspectives of practical theory and experiments, aiming to benchmark recommendation for rigorous evaluation. For the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review of 141 papers published at eight top-tier conferences between 2017 and 2020. We then classify them into model-independent and model-dependent hyper-factors, and accordingly define and discuss different modes of rigorous evaluation in depth. For the experimental study, we release the DaisyRec 2.0 library, which integrates these hyper-factors to perform rigorous evaluation, and with it conduct a holistic empirical study to unveil the impacts of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and reporting the performance of ten state-of-the-art methods across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays the foundation for further investigation.
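To make the notion of fixing model-independent hyper-factors concrete, the following is a minimal, hypothetical sketch in Python. The names (`split_leave_one_out`, `evaluate`, and so on) are illustrative assumptions, not DaisyRec 2.0's actual API; the sketch only shows how evaluation-chain choices such as the split strategy, the number of sampled negative candidates, the cut-off k, and the random seed can be pinned down up front so that different models are compared under identical settings.

```python
# Hypothetical sketch: hyper-factor-controlled evaluation with a fixed
# leave-one-out split, sampled negative candidates, and HR@k.
# These names are NOT DaisyRec 2.0's real API.
import random

def split_leave_one_out(user_items):
    """Hold out each user's last interaction as the single test item."""
    train, test = {}, {}
    for user, items in user_items.items():
        train[user] = items[:-1]
        test[user] = items[-1]
    return train, test

def hit_ratio_at_k(ranked, target, k):
    """1.0 if the held-out item appears in the top-k ranking, else 0.0."""
    return 1.0 if target in ranked[:k] else 0.0

def evaluate(score_fn, user_items, n_candidates=100, k=10, seed=42):
    """Rank each held-out item against sampled negatives; report mean HR@k.

    n_candidates, k, and seed are model-independent hyper-factors:
    fixing them keeps the comparison fair across models.
    """
    rng = random.Random(seed)
    train, test = split_leave_one_out(user_items)
    all_items = {i for items in user_items.values() for i in items}
    hits = []
    for user, target in test.items():
        unseen = sorted(all_items - set(user_items[user]))
        negatives = rng.sample(unseen, min(n_candidates, len(unseen)))
        candidates = negatives + [target]
        ranked = sorted(candidates, key=lambda i: score_fn(user, i), reverse=True)
        hits.append(hit_ratio_at_k(ranked, target, k))
    return sum(hits) / len(hits)

# Toy usage: synthetic interactions scored by a random (baseline) model.
data = {u: random.Random(u).sample(range(200), 20) for u in range(50)}
print("HR@10:", evaluate(lambda u, i: random.random(), data))
```

Under this setup, swapping in a different `score_fn` (i.e., a different model) changes nothing else in the pipeline, which is the essence of evaluating all models under one fixed configuration of hyper-factors.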