This work explores the reproducibility of CFGAN. CFGAN and its family of models (TagRec, MTPR, and CRGAN) learn to generate personalized, fake-but-realistic rankings of preferences for top-N recommendation from previous user interactions. This work successfully replicates the results published in the original paper and discusses the impact of certain differences between the CFGAN framework and the model used in the original evaluation. The absence of random noise and the use of real user profiles as condition vectors leave the generator prone to learning a degenerate solution in which the output vector is identical to the input vector, so that it behaves essentially as a simple autoencoder. The work further expands the experimental analysis by comparing CFGAN against a selection of simple, well-known, and properly optimized baselines, observing that CFGAN is not consistently competitive with them despite its high computational cost. To ensure the reproducibility of these analyses, this work describes the experimental methodology and publishes all datasets and source code.
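The degenerate solution mentioned above can be illustrated with a minimal sketch. This is not the paper's code and uses a plain linear model rather than CFGAN's actual generator network; it only demonstrates the underlying failure mode: when the condition vector is the full real user profile and no random noise is injected, a reconstruction-style objective is minimized by the identity mapping, so the "generator" collapses into a simple autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 64, 32

# Synthetic implicit-feedback matrix standing in for real user profiles.
X = (rng.random((n_users, n_items)) < 0.2).astype(float)

# Linear "generator" conditioned on the user profile itself, with no noise.
W = rng.normal(scale=0.01, size=(n_items, n_items))
lr = 0.3
for _ in range(3000):
    out = X @ W                         # condition vector -> generated profile
    grad = X.T @ (out - X) / n_users    # gradient of mean squared reconstruction loss
    W -= lr * grad

# The loss is minimized by output == input, so the learned map approaches
# the identity and the generated profiles reproduce the conditioning input.
err = float(np.abs(X @ W - X).mean())
print(f"mean reconstruction error: {err:.4f}")
```

Injecting random noise into the generator input, or conditioning on something other than the full target profile, is what prevents this shortcut; without either, the adversarial setup adds cost but no generative behavior.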