A challenging aspect of the bandit problem is that a stochastic reward is observed only for the chosen arm, while the rewards of the other arms remain missing. The dependence of the arm choice on the past context and reward pairs compounds the complexity of regret analysis. We propose a novel multi-armed contextual bandit algorithm called Doubly Robust (DR) Thompson Sampling, which applies the doubly robust estimator from the missing data literature to Thompson Sampling with contexts (\texttt{LinTS}). Unlike previous works that rely on missing data techniques (\citet{dimakopoulou2019balanced}, \citet{kim2019doubly}), the proposed algorithm is designed to allow a novel additive regret decomposition, leading to an improved regret bound of order $\tilde{O}(\phi^{-2}\sqrt{T})$, where $\phi^2$ is the minimum eigenvalue of the covariance matrix of contexts. This is the first regret bound of \texttt{LinTS} stated in terms of $\phi^2$ without the dimension of the context, $d$. Applying the relationship between $\phi^2$ and $d$, the regret bound of the proposed algorithm is $\tilde{O}(d\sqrt{T})$ in many practical scenarios, improving the bound of \texttt{LinTS} by a factor of $\sqrt{d}$. A benefit of the proposed method is that it utilizes all the context data, chosen or not, which allows us to circumvent the technical definition of unsaturated arms used in the theoretical analysis of \texttt{LinTS}. Empirical studies show the advantage of the proposed algorithm over \texttt{LinTS}.
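As an illustrative sketch only (not the paper's exact construction), the doubly robust idea of filling in the missing rewards of unchosen arms can be written as follows: unobserved arms receive the model-imputed reward $x_i^\top \hat{\beta}$, while the chosen arm blends its imputed reward with the observed one, inverse-weighted by its selection probability. All names and the function signature here are hypothetical.

```python
import numpy as np

def dr_pseudo_rewards(contexts, chosen, reward, probs, beta_hat):
    # Sketch of a doubly robust pseudo-reward construction (hypothetical
    # signature, not the paper's implementation). Every arm gets the
    # imputed reward x_i^T beta_hat; the chosen arm additionally receives
    # an inverse-probability-weighted correction from its observed reward.
    preds = contexts @ beta_hat          # imputed reward for every arm
    pseudo = preds.copy()
    a, p = chosen, probs[chosen]         # chosen arm and its selection prob.
    pseudo[a] = (1.0 - 1.0 / p) * preds[a] + reward / p
    return pseudo
```

Averaged over the arm-selection distribution, each arm's pseudo-reward is unbiased for its true mean reward, which is what lets the estimator use all contexts, chosen or not.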