Dynamic treatment rules, or policies, are sequences of decision functions over multiple stages that are tailored to individual features. One important class of treatment policies in practice, namely multi-stage stationary treatment policies, prescribes treatment assignment probabilities using the same decision function across stages, where the decision is based on the same set of features consisting of both baseline variables (e.g., demographics) and time-evolving variables (e.g., routinely collected disease biomarkers). Although there is an extensive literature on constructing valid inference for the value function associated with dynamic treatment policies, little work has been done on inference for the policies themselves, especially in the presence of high-dimensional feature variables. We aim to fill this gap in this work. Specifically, we first estimate the multi-stage stationary treatment policy based on an augmented inverse probability weighted (AIPW) estimator of the value function to increase asymptotic efficiency, and further apply a penalty to select important feature variables. We then construct one-step improvements of the policy parameter estimators. Theoretically, we show that the improved estimators are asymptotically normal even when the nuisance parameters are estimated at a slow convergence rate and the dimension of the feature variables grows exponentially with the sample size. Our numerical studies demonstrate that the proposed method performs well in small samples, and that its performance can be further improved by choosing an augmentation term that approximates the rewards or minimizes the variance of the value function.
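To fix ideas, the augmented inverse probability weighted value estimator referred to above can be sketched in the single-stage case; the notation below is generic and illustrative rather than taken from the paper (here $\hat{e}$ is an estimated propensity score and $\hat{Q}$ an estimated outcome model, both nuisance estimates):

```latex
% Doubly robust (AIPW) estimate of the value of a policy \pi,
% single-stage sketch: weight the outcome residual by inverse propensity,
% then add back the outcome-model prediction under \pi.
\hat{V}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \left[
    \frac{\mathbf{1}\{A_i = \pi(X_i)\}}{\hat{e}(A_i \mid X_i)}
      \bigl(Y_i - \hat{Q}(X_i, A_i)\bigr)
    \;+\; \hat{Q}\bigl(X_i, \pi(X_i)\bigr)
  \right]
```

The augmentation term $\hat{Q}$ is the quantity tuned in the abstract's final sentence: choosing it to approximate the rewards, or to minimize the variance of $\hat{V}(\pi)$, leaves consistency intact while affecting efficiency.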