Causal learning is the key to obtaining stable predictions and answering \textit{what if} questions in decision-making. A central task in causal learning is estimating the average treatment effect (ATE) from observational data. Double/Debiased Machine Learning (DML) is one of the prevalent methods for estimating the ATE. However, DML estimators can suffer from an \textit{error-compounding issue} and even give extreme estimates when the propensity scores are close to 0 or 1. Previous studies have mitigated this issue through empirical tricks such as propensity score trimming, yet none of the existing works solves it from a theoretical standpoint. In this paper, we propose a \textit{Robust Causal Learning (RCL)} method to offset the deficiencies of DML estimators. Theoretically, the RCL estimators i) satisfy the (higher-order) orthogonality condition and are as \textit{consistent and doubly robust} as the DML estimators, and ii) get rid of the error-compounding issue. Empirically, comprehensive experiments show that: i) the RCL estimators give more stable estimates of the causal parameters than DML; and ii) the RCL estimators outperform traditional estimators and their variants when applying different machine learning models on both simulation and benchmark datasets, as well as on a mimic consumer credit dataset generated by WGAN.
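For context on the error-compounding issue mentioned above, the behavior is visible in the standard doubly robust (AIPW-type) score that underlies DML-style ATE estimation; this is the textbook form, not the RCL construction proposed in the paper:
\[
\psi(W;\theta,\eta) \;=\; \mu_1(X) - \mu_0(X) \;+\; \frac{T\,\bigl(Y-\mu_1(X)\bigr)}{e(X)} \;-\; \frac{(1-T)\,\bigl(Y-\mu_0(X)\bigr)}{1-e(X)} \;-\; \theta,
\]
where $e(X)$ is the estimated propensity score and $\mu_0,\mu_1$ are the outcome regressions. When $e(X)$ approaches 0 or 1, the inverse-propensity terms $1/e(X)$ and $1/(1-e(X))$ blow up, so estimation errors in the nuisance functions are amplified, which is the source of the extreme estimates the abstract describes.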