Debiased machine learning is a meta-algorithm, based on bias correction and sample splitting, for constructing confidence intervals for functionals, i.e., scalar summaries, of machine learning algorithms. For example, an analyst may want a confidence interval for a treatment effect estimated with a neural network. We provide a nonasymptotic debiased machine learning theorem that encompasses any global or local functional of any machine learning algorithm satisfying a few simple, interpretable conditions. Formally, we prove consistency, Gaussian approximation, and semiparametric efficiency via finite-sample arguments. The rate of convergence is $n^{-1/2}$ for global functionals and degrades gracefully for local functionals. Our results culminate in a simple set of conditions that an analyst can use to translate modern learning-theory rates into traditional statistical inference. The conditions reveal a general double robustness property for ill-posed inverse problems.
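To make the two ingredients of the meta-algorithm concrete, here is a minimal sketch of debiased machine learning for the average treatment effect example mentioned above: nuisance functions are fit by neural networks on one fold (sample splitting), and a bias-corrected, doubly robust score is averaged on the held-out fold. The specific learners, the two-fold split, and the propensity clipping are illustrative assumptions, not choices prescribed by the paper.

```python
# Sketch of debiased machine learning (cross-fitting + bias-corrected score)
# for the average treatment effect with neural-network nuisance learners.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier, MLPRegressor

def dml_ate(X, D, Y, n_folds=2, seed=0):
    """Return an ATE point estimate and a 95% confidence interval."""
    n = len(Y)
    psi = np.empty(n)  # debiased score evaluated at each observation
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # Stage 1: learn nuisances on the training fold only (sample splitting).
        prop = MLPClassifier(max_iter=2000).fit(X[train], D[train])
        mu1 = MLPRegressor(max_iter=2000).fit(
            X[train][D[train] == 1], Y[train][D[train] == 1])
        mu0 = MLPRegressor(max_iter=2000).fit(
            X[train][D[train] == 0], Y[train][D[train] == 0])
        # Stage 2: evaluate the bias-corrected (doubly robust) score on the
        # held-out fold; clipping the propensity is a practical safeguard.
        e = np.clip(prop.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1, m0 = mu1.predict(X[test]), mu0.predict(X[test])
        psi[test] = (m1 - m0
                     + D[test] * (Y[test] - m1) / e
                     - (1 - D[test]) * (Y[test] - m0) / (1 - e))
    theta = psi.mean()                 # debiased point estimate
    se = psi.std(ddof=1) / np.sqrt(n)  # plug-in standard error
    return theta, (theta - 1.96 * se, theta + 1.96 * se)
```

Averaging the orthogonal score, rather than plugging the learned regressions directly into the functional, is what removes the first-order bias of the machine learning stage and yields the $n^{-1/2}$ Gaussian approximation for this global functional.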