I propose a practical procedure based on bias correction and sample splitting to calculate confidence intervals for functionals of generic kernel methods, i.e. nonparametric estimators learned in a reproducing kernel Hilbert space (RKHS). For example, an analyst may desire confidence intervals for functionals of kernel ridge regression. I propose a bias correction that mirrors kernel ridge regression. The framework encompasses (i) evaluations over discrete domains, (ii) derivatives over continuous domains, (iii) treatment effects of discrete treatments, and (iv) incremental treatment effects of continuous treatments. For each target quantity (i)-(iv), I prove root-n consistency, Gaussian approximation, and semiparametric efficiency by finite-sample arguments. I show that the classic assumptions of RKHS learning theory also suffice for inference.
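As a concrete illustration of case (iii), the sketch below implements a cross-fitted, bias-corrected estimate of the average treatment effect of a binary treatment, together with a Gaussian confidence interval. This is a minimal sketch under simplifying assumptions, not the paper's exact procedure: the outcome regression is kernel ridge regression, while the balancing weights are approximated here by a clipped kernel ridge fit of the treatment on covariates; the function name `debiased_ate` and the tuning choices (RBF kernel, ridge penalty `lam`, five folds) are illustrative.

```python
# Minimal sketch: cross-fitted, bias-corrected ATE with a 95% Gaussian CI.
# The nuisance estimators and tuning choices below are illustrative assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

def debiased_ate(Y, D, X, lam=1e-2, n_folds=5):
    """Y: (n,) outcomes, D: (n,) binary treatments, X: (n, p) covariates."""
    n = len(Y)
    psi = np.zeros(n)  # bias-corrected (doubly robust) scores, one per unit
    for train, test in KFold(n_folds, shuffle=True, random_state=0).split(X):
        # Nuisance 1: outcome regression gamma(d, x) = E[Y | D=d, X=x],
        # learned by kernel ridge regression on the training folds.
        DX = np.column_stack([D, X])
        gamma = KernelRidge(kernel="rbf", alpha=lam).fit(DX[train], Y[train])
        g1 = gamma.predict(np.column_stack([np.ones(len(test)), X[test]]))
        g0 = gamma.predict(np.column_stack([np.zeros(len(test)), X[test]]))
        gD = gamma.predict(DX[test])
        # Nuisance 2: propensity pi(x) = E[D | X=x], also fit by kernel
        # ridge (a simplification for this sketch), clipped for overlap.
        pi = KernelRidge(kernel="rbf", alpha=lam).fit(X[train], D[train])
        p = np.clip(pi.predict(X[test]), 0.01, 0.99)
        # Bias correction: augment the plug-in contrast g1 - g0 with the
        # inverse-propensity-weighted residual, evaluated on held-out data.
        psi[test] = (g1 - g0
                     + (D[test] / p - (1 - D[test]) / (1 - p)) * (Y[test] - gD))
    theta = psi.mean()
    se = psi.std(ddof=1) / np.sqrt(n)  # root-n standard error
    return theta, (theta - 1.96 * se, theta + 1.96 * se)
```

On simulated data with a known effect, `debiased_ate(Y, D, X)` returns a point estimate and a 95% interval; sample splitting keeps the nuisance fits independent of the scores they enter, which is what underpins the Gaussian approximation claimed above.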