Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial in applications where online experimentation is limited. However, because it relies entirely on logged data, OPE/L is sensitive to environment distribution shifts -- discrepancies between the data-generating environment and the one in which policies are deployed. \citet{si2020distributional} proposed distributionally robust OPE/L (DROPE/L) to address this, but their proposal relies on inverse-propensity weighting, whose estimation error and regret deteriorate when propensities are nonparametrically estimated and whose variance is suboptimal even when they are not. For standard, non-robust OPE/L, this is solved by doubly robust (DR) methods, but these do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR$^2$OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR$^2$OPE requires fitting only a small number of regressions, just like DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR$^2$OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of $\mathcal{O}\left(N^{-1/2}\right)$ even when unknown propensities are nonparametrically estimated. We empirically validate our algorithms in simulations and further extend our results to general $f$-divergence uncertainty sets.
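For reference, the worst-case expectation induced by a KL-divergence uncertainty set admits a well-known scalar dual (the standard KL-DRO duality of Hu and Hong). The display below is only an illustrative sketch, not the paper's own notation: here $Y(\pi)$ denotes the reward earned by policy $\pi$, $P_0$ the data-generating distribution, $\delta$ the uncertainty radius, and $\alpha$ a scalar dual variable.
\[
\inf_{Q:\, \mathrm{KL}(Q \,\|\, P_0) \le \delta} \mathbb{E}_{Q}\!\left[ Y(\pi) \right]
\;=\;
\sup_{\alpha > 0} \left\{ -\alpha \log \mathbb{E}_{P_0}\!\left[ e^{-Y(\pi)/\alpha} \right] - \alpha \delta \right\}.
\]
Under this formulation, estimating the robust value reduces to estimating the moment $\mathbb{E}_{P_0}\!\left[ e^{-Y(\pi)/\alpha} \right]$ from logged data and optimizing over the single dual variable $\alpha$, which is where the choice between IPW-style and DR-style estimation of that moment becomes relevant.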