This paper is about the ability and means to root-n consistently and efficiently estimate linear, mean square continuous functionals of a high dimensional, approximately sparse regression. Such objects include a wide variety of interesting parameters, such as the covariance between two regression residuals, a coefficient of a partially linear model, an average derivative, and the average treatment effect. We give lower bounds on the convergence rates of estimators of such objects and find that these bounds are substantially larger than in a low dimensional, semiparametric setting. We also give automatic debiased machine learners that are $1/\sqrt{n}$ consistent and asymptotically efficient under minimal conditions. These estimators use either no cross-fitting or a special kind of cross-fitting to attain efficiency when the regression converges faster than $n^{-1/4}$. This rate condition is substantially weaker than requiring that the product of the convergence rates of two functions be faster than $1/\sqrt{n}$, as many other debiased machine learners do.
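As a brief sketch of the debiased construction the abstract alludes to (the notation below is standard in the debiased machine learning literature and serves as an illustration, not necessarily this paper's exact formulation): for a linear functional $\theta_0 = E[m(W,\gamma_0)]$ of a regression $\gamma_0(x) = E[Y \mid X = x]$, mean square continuity implies the existence of a Riesz representer $\alpha_0$ with $E[m(W,\gamma)] = E[\alpha_0(X)\gamma(X)]$ for all $\gamma$, and a debiased estimator takes the form
$$\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}\left\{ m(W_i,\hat{\gamma}) + \hat{\alpha}(X_i)\bigl[Y_i - \hat{\gamma}(X_i)\bigr] \right\},$$
where the correction term $\hat{\alpha}(X_i)[Y_i - \hat{\gamma}(X_i)]$ makes the moment function Neyman orthogonal, so that first order effects of estimation error in $\hat{\gamma}$ on $\hat{\theta}$ are removed.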