In a recent SIGMOD paper titled "Debunking the Myths of Influence Maximization: An In-Depth Benchmarking Study", Arora et al. [1] undertake a performance benchmarking study of several well-known influence maximization algorithms. In the process, they contradict several published results and claim to have unearthed and debunked several "myths" surrounding influence maximization research. The goal of this article is to examine their claims objectively and critically, and to refute the erroneous ones. Our investigation reveals that, first, the overall experimental methodology in Arora et al. [1] is flawed and leads to scientifically incorrect conclusions. Second, the paper [1] is riddled with issues specific to a variety of influence maximization algorithms, including buggy experiments, and draws many misleading conclusions about those algorithms. Importantly, the authors fail to appreciate the trade-off between running time and solution quality, and do not incorporate it correctly into their experimental methodology. In this article, we systematically point out the issues present in [1] and refute 11 of their misclaims.