We consider estimating a low-dimensional parameter in an estimating equation involving high-dimensional nuisances that depend on the parameter. A central example is the efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference, which involves as a nuisance the covariate-conditional cumulative distribution function evaluated at the quantile to be estimated. Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances using flexible machine learning methods, but applying it to problems with parameter-dependent nuisances is impractical. For (L)QTE, DML requires that we learn the whole covariate-conditional cumulative distribution function. We instead propose localized debiased machine learning (LDML), which avoids this burdensome step and needs only estimate nuisances at a single initial rough guess for the parameter. For (L)QTE, LDML involves learning just two regression functions, a standard task for machine learning methods. We prove that under lax rate conditions our estimator has the same favorable asymptotic behavior as the infeasible estimator that uses the unknown true nuisances. Thus, LDML notably enables practically feasible and theoretically grounded efficient estimation of important quantities in causal inference such as (L)QTEs when we must control for many covariates and/or flexible relationships, as we demonstrate in empirical studies.