We study the best-arm identification (BAI) problem with a fixed budget and contextual (covariate) information. In each round of an adaptive experiment, we observe a context and then choose a treatment arm based on past observations and the current context. Our goal is to identify the best treatment arm, the arm whose expected reward marginalized over the contextual distribution is maximal, with a minimal probability of misidentification. In this study, we consider a class of nonparametric bandit models that converge to location-shift models as the gaps go to zero. First, we derive lower bounds on the probability of misidentification for a certain class of strategies and bandit models (probabilistic models of potential outcomes) under a small-gap regime, a situation in which the gaps in expected rewards between the best and suboptimal treatment arms go to zero; this corresponds to one of the worst cases for identifying the best treatment arm. We then develop the ``Random Sampling (RS)-Augmented Inverse Probability Weighting (AIPW) strategy,'' which is asymptotically optimal in the sense that its probability of misidentification matches the lower bound as the budget goes to infinity under the small-gap regime. The RS-AIPW strategy consists of the RS rule, which tracks a target sample allocation ratio, and the recommendation rule, which uses the AIPW estimator.
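For concreteness, a standard form of the AIPW estimator underlying the recommendation rule can be sketched as follows; the notation here ($w_t$, $\hat{f}_t$, and so on) is illustrative and not defined in this abstract. With contexts $X_t$, chosen arms $A_t$, observed rewards $Y_t$, sampling probabilities $w_t(a \mid X_t)$, and plug-in estimators $\hat{f}_t(a, x)$ of the conditional expected reward constructed only from observations before round $t$, the estimator for arm $a$ after budget $T$ is
\[
\hat{\mu}_T(a) = \frac{1}{T}\sum_{t=1}^{T}\left(\frac{\mathbb{1}[A_t = a]\,\bigl(Y_t - \hat{f}_t(a, X_t)\bigr)}{w_t(a \mid X_t)} + \hat{f}_t(a, X_t)\right),
\]
and the strategy recommends $\hat{a}_T \in \arg\max_{a} \hat{\mu}_T(a)$. Constructing $\hat{f}_t$ and $w_t$ sequentially from past data makes the centered summands a martingale difference sequence, which is the usual reason AIPW-type estimators are employed in adaptive experiments.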