The constant development of new data analysis methods in many fields of research is accompanied by an increasing awareness that these new methods often perform better in their introductory paper than in subsequent comparison studies conducted by other researchers. We attempt to explain this discrepancy by conducting a systematic experiment that we call "cross-design validation of methods". In the experiment, we select two methods designed for the same data analysis task, reproduce the results reported in each paper, and then re-evaluate each method based on the study design (i.e., data sets, competing methods, and evaluation criteria) that was used to demonstrate the abilities of the other method. We conduct the experiment for two data analysis tasks, namely cancer subtyping using multi-omic data and differential gene expression analysis. Three of the four methods included in the experiment indeed perform worse when evaluated on the new study design, which is mainly due to the different data sets. Apart from illustrating the many degrees of freedom that exist in the assessment of a method and their effect on its performance, our experiment suggests that the performance discrepancies between original and subsequent papers may be caused not only by the non-neutrality of the authors proposing the new method but also by differences in the level of expertise and the field of application.
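To make the cross-design swap concrete, the following is a minimal sketch in Python of the evaluation protocol described above, assuming hypothetical Method and StudyDesign abstractions; the names and the fit_and_score callable are illustrative only and do not correspond to the actual analysis pipelines used in the experiment.

    # Minimal sketch of cross-design validation: each method is scored on its
    # own study design and on the study design of the other method.
    # All names here are hypothetical illustrations.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple


    @dataclass
    class StudyDesign:
        """Study design as defined in the text: data sets, competitors, criteria."""
        datasets: List[str]
        competing_methods: List[str]
        evaluation_criteria: List[Callable]


    @dataclass
    class Method:
        """A method plus a user-supplied scoring routine: (dataset, criterion) -> score."""
        name: str
        fit_and_score: Callable[[str, Callable], float]


    def evaluate(method: Method, design: StudyDesign) -> Dict[str, float]:
        """Score one method on every dataset/criterion pair of a study design."""
        return {
            f"{dataset}/{criterion.__name__}": method.fit_and_score(dataset, criterion)
            for dataset in design.datasets
            for criterion in design.evaluation_criteria
        }


    def cross_design_validation(
        method_a: Method, design_a: StudyDesign,
        method_b: Method, design_b: StudyDesign,
    ) -> Dict[Tuple[str, str], Dict[str, float]]:
        """Re-evaluate each method on both its own and the other method's design."""
        return {
            (method_a.name, "original_design"): evaluate(method_a, design_a),
            (method_a.name, "swapped_design"): evaluate(method_a, design_b),
            (method_b.name, "original_design"): evaluate(method_b, design_b),
            (method_b.name, "swapped_design"): evaluate(method_b, design_a),
        }

Comparing a method's scores under "original_design" and "swapped_design" is what reveals the performance drop reported for three of the four methods.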