Recent arguments that machine learning (ML) is facing a reproducibility and replication crisis suggest that some published claims in ML research cannot be taken at face value. These concerns inspire analogies to the replication crisis affecting the social and medical sciences. They also inspire calls for integrating statistical approaches to causal inference and predictive modeling. A deeper understanding of what reproducibility concerns in supervised ML research have in common with the replication crisis in experimental science puts the new concerns in perspective and helps researchers avoid "the worst of both worlds," in which ML researchers begin borrowing methodologies from explanatory modeling without understanding their limitations, and vice versa. We contribute a comparative analysis of concerns about inductive learning that arise in causal attribution as exemplified in psychology versus predictive modeling as exemplified in ML. We identify themes that recur in reform discussions, such as overreliance on asymptotic theory and non-credible beliefs about real-world data generating processes. We argue that in both fields, claims from learning are implied to generalize outside the specific environment studied (e.g., the input dataset or subject sample, the modeling implementation), yet are often impossible to refute due to undisclosed sources of variance in the learning pipeline. In particular, the errors now being acknowledged in ML expose cracks in long-held beliefs that optimizing predictive accuracy on huge datasets absolves one from having to consider a true data generating process or formally represent uncertainty in performance claims. We conclude by discussing risks that arise when sources of error are misdiagnosed and the need to acknowledge the role of human inductive biases in learning and reform.
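To make the point about undisclosed sources of variance concrete, the following minimal sketch (our illustration, not part of the paper; the synthetic task and scikit-learn pipeline are assumptions chosen for brevity) shows how a reported accuracy can depend on an arbitrary seed governing the train/test split, and why a single favorable point estimate without an accompanying interval is hard to refute or replicate.

```python
# Illustrative sketch: variance in reported accuracy from one
# commonly undisclosed pipeline choice (the train/test split seed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical synthetic task standing in for "the input dataset".
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

accuracies = []
for seed in range(30):
    # The split seed is one of many seed-dependent choices in a
    # typical learning pipeline that rarely appears in the paper.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracies.append(accuracy_score(y_te, model.predict(X_te)))

# Reporting only the best run overstates performance; an interval
# formally represents the uncertainty a single number hides.
print(f"best run:  {max(accuracies):.3f}")
print(f"mean ± sd: {np.mean(accuracies):.3f} ± {np.std(accuracies):.3f}")
```

The spread across seeds is exactly the kind of variance that, when left unreported, makes a performance claim effectively unfalsifiable by a reader attempting replication.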