Breiman's classic paper casts data analysis as a choice between two cultures: data modelers and algorithmic modelers. Stated broadly, data modelers analyze data with simple, interpretable models whose theoretical properties are well understood. Algorithmic modelers prioritize predictive accuracy and analyze data with more flexible function approximators. This dichotomy overlooks a third set of models: mechanistic models derived from scientific theories (e.g., ODE/SDE simulators). Mechanistic models encode application-specific scientific knowledge about the data. While these categories represent extreme points in model space, modern computational and algorithmic tools let us interpolate between them, producing flexible, interpretable, and scientifically informed hybrids that can deliver accurate and robust predictions and resolve issues with data analysis that Breiman describes, such as the Rashomon effect and Occam's dilemma. Challenges remain in finding an appropriate point in model space: there are many choices in how to compose model components and in the degree to which each component informs inference.
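To make the interpolation concrete, here is a minimal sketch of one such hybrid: a mechanistic exponential-decay model fit first, with a flexible component (here a polynomial, standing in for a richer function approximator) then fit to the residual the mechanism misses. The data, model forms, and parameter values are all hypothetical illustrations, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)

# Synthetic observations: mechanistic decay plus an unmodeled periodic
# effect and noise (all hypothetical, for illustration only).
y_obs = 2.0 * np.exp(-0.8 * t) + 0.1 * np.sin(3.0 * t) \
    + rng.normal(0.0, 0.02, size=t.shape)

# Mechanistic component: y' = -k*y, with closed form y(t) = y0*exp(-k*t).
def decay(params, t):
    y0, k = params
    return y0 * np.exp(-k * t)

# Step 1: fit the mechanistic parameters (y0, k) by a coarse grid search
# minimizing squared error.
best = min(
    ((y0, k) for y0 in np.linspace(1.0, 3.0, 41)
             for k in np.linspace(0.1, 2.0, 96)),
    key=lambda p: np.sum((y_obs - decay(p, t)) ** 2),
)

# Step 2: fit a flexible residual model (degree-5 polynomial) to what
# the mechanistic part leaves unexplained.
residual = y_obs - decay(best, t)
coeffs = np.polyfit(t, residual, deg=5)
y_hybrid = decay(best, t) + np.polyval(coeffs, t)

mech_mse = np.mean((y_obs - decay(best, t)) ** 2)
hybrid_mse = np.mean((y_obs - y_hybrid) ** 2)
print(f"mechanistic-only MSE: {mech_mse:.5f}")
print(f"hybrid MSE:           {hybrid_mse:.5f}")
```

The mechanistic part keeps the model interpretable and anchored to theory, while the residual fit recovers structure the theory omits; how much weight each component carries is exactly the kind of model-space choice the paragraph above describes.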