Orthogonal statistical learning and double machine learning have emerged as general frameworks for two-stage statistical prediction in the presence of a nuisance component. We establish non-asymptotic bounds on the excess risk of orthogonal statistical learning methods with a loss function satisfying a self-concordance property. Our bounds improve upon existing bounds by a dimension factor while lifting the assumption of strong convexity. We illustrate the results with examples from multiple treatment effect estimation and generalized partially linear modeling.
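For reference, the classical self-concordance property in the sense of Nesterov and Nemirovski requires, for a thrice-differentiable convex function $f$ of one variable,
\[
  |f'''(t)| \le 2\, f''(t)^{3/2} \quad \text{for all } t ;
\]
the precise variant assumed here may differ (for instance, a pseudo self-concordance condition of the form $|f'''(t)| \le f''(t)$, as used by Bach for logistic-type losses).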