The past two decades have witnessed the rapid development of personalized recommendation techniques. Despite the significant progress made in both research and practice of recommender systems, to date, the field still lacks a widely recognized benchmarking standard. Many existing studies perform model evaluations and comparisons in an ad-hoc manner, for example, by employing their own private data splits or adopting different experimental settings. Such practices not only make it difficult to reproduce existing studies, but also lead to inconsistent experimental results among them, which largely limits the credibility and practical value of research in this field. To tackle these issues, we present an initiative aimed at open benchmarking for recommender systems. In contrast to some earlier attempts towards this goal, we take a further step by setting up a standardized benchmarking pipeline for reproducible research, which integrates all the details about datasets, source code, hyper-parameter settings, running logs, and evaluation results. The benchmark is designed with comprehensiveness and sustainability in mind: it covers both matching and ranking tasks, and allows anyone to easily follow and contribute. We believe that our benchmark will not only reduce the redundant effort researchers spend re-implementing or re-running existing baselines, but also drive more solid and reproducible research on recommender systems.