We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set. Building on the knockoff framework of Cand\`es et al. (2018), we develop a novel testing procedure that works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. The CPI can be efficiently computed for high-dimensional data without any sparsity constraints. We demonstrate convergence criteria for the CPI and develop statistical inference procedures for evaluating its magnitude, significance, and precision. These tests aid in feature and model selection, extending traditional frequentist and Bayesian techniques to general supervised learning tasks. The CPI may also be applied in causal discovery to identify underlying multivariate graph structures. We test our method using various algorithms, including linear regression, neural networks, random forests, and support vector machines. Empirical results show that the CPI compares favorably to alternative variable importance measures and other nonparametric tests of conditional independence on a diverse array of real and simulated datasets. Simulations confirm that our inference procedures successfully control Type I error and achieve nominal coverage probability. Our method has been implemented in an R package, cpi, which can be downloaded from https://github.com/dswatson/cpi.
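To make the procedure concrete, here is a minimal illustrative sketch of the CPI idea in Python, not the authors' cpi R package: fit a learner once, replace one feature with a knockoff copy on held-out data, and run a paired t-test on the per-observation change in loss. The crude Gaussian knockoff sampler, the toy data, and all function names below are assumptions for illustration only; a valid knockoff sampler (e.g., second-order knockoffs) should be substituted in practice.

```python
# Illustrative sketch of the CPI idea (not the authors' cpi R package):
# fit a learner, swap one feature for a knockoff copy, and test the
# per-observation change in loss with a paired (one-sample) t-test.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: y depends on X[:, 0] and X[:, 1]; X[:, 2] is pure noise.
X = rng.normal(size=(500, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
loss_orig = (y_te - model.predict(X_te)) ** 2          # squared-error loss

def gaussian_knockoff(X, j, rng):
    """Crude stand-in for a knockoff sampler: draw column j from its
    conditional distribution given the other columns, estimated by
    linear regression. Real knockoff constructions are more careful."""
    others = np.delete(X, j, axis=1)
    fit = LinearRegression().fit(others, X[:, j])
    resid_sd = np.std(X[:, j] - fit.predict(others))
    return fit.predict(others) + rng.normal(scale=resid_sd, size=X.shape[0])

for j in range(X.shape[1]):
    X_ko = X_te.copy()
    X_ko[:, j] = gaussian_knockoff(X_te, j, rng)
    loss_ko = (y_te - model.predict(X_ko)) ** 2
    delta = loss_ko - loss_orig                         # per-observation loss change
    cpi = delta.mean()
    t_stat, p_two = stats.ttest_1samp(delta, 0.0)
    p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2  # one-sided: loss should increase
    print(f"feature {j}: CPI = {cpi:.3f}, one-sided p = {p_one:.3g}")
```

Under these toy assumptions, the informative features (columns 0 and 1) should show positive CPI values with small p-values, while the noise feature should not; the same loop structure accommodates any learner, loss, and knockoff sampler, as the abstract describes.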