In bandit algorithms, the randomly time-varying adaptive experimental design makes it difficult to apply traditional limit theorems to off-policy evaluation of the treatment effect. Moreover, the normal approximation given by the central limit theorem becomes unsatisfactory because the small sample size of the inferior arm provides too little information. To resolve this issue, we introduce a backward asymptotic expansion method and prove the validity of this scheme based on partial mixing, which was originally introduced for expanding the distribution of a functional of a jump-diffusion process in a random environment. In this paper, the theory is generalized to incorporate the backward propagation of random functions in the bandit algorithm. Besides the analytical validation, simulation studies also support the new method. Our formulation is general and applies to nonlinearly parametrized differentiable statistical models with an adaptive design.