Being able to duplicate published research results is an important part of the research process, whether to build upon these findings or to compare new results with them. This property is called "replicability" when the original authors' artifacts (e.g., code) are used, and "reproducibility" otherwise (e.g., when algorithms are re-implemented). Reproducibility and replicability of research results have recently gained considerable interest, with assessment studies being conducted in various fields, and they are often seen as drivers of better result dissemination and transparency. In this work, we assess replicability in Computer Graphics by evaluating whether the published code is available and whether it works properly. As a proxy for the field, we compiled, ran, and analyzed 151 codebases from 374 papers of the 2014, 2016, and 2018 SIGGRAPH conferences. Our analysis shows a clear increase in the number of papers with available and operational research code, with variations across subfields, and indicates a correlation between code replicability and citation count. We also provide an interactive tool to explore our results and evaluation data.