We study the Bayesian regret of the renowned Thompson Sampling algorithm in contextual bandits with binary losses and adversarially selected contexts. We adapt the information-theoretic perspective of Russo and Van Roy [2016] to the contextual setting by introducing a new concept of information ratio based on the mutual information between the unknown model parameter and the observed loss. This allows us to bound the regret in terms of the entropy of the prior distribution through a remarkably simple proof, and with no structural assumptions on the likelihood or the prior. The extension to priors with infinite entropy only requires a Lipschitz assumption on the log-likelihood. An interesting special case is that of logistic bandits with $d$-dimensional parameters, $K$ actions, and Lipschitz logits, for which we provide a $\widetilde{O}(\sqrt{dKT})$ regret upper bound that does not depend on the smallest slope of the sigmoid link function.
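To convey the shape of the argument, here is a minimal sketch in the style of Russo and Van Roy [2016], with the information ratio lifted to the model parameter as described above. The notation is assumed for illustration and may differ from the paper's: $A_t$ is the action drawn by Thompson Sampling, $A_t^\star$ the optimal action for the current context, $Y_t$ the observed binary loss, and $\mathbb{E}_t$, $I_t$ denote expectation and mutual information conditioned on the history.
\[
\Gamma_t \;=\; \frac{\bigl(\mathbb{E}_t\bigl[\ell_t(A_t) - \ell_t(A_t^\star)\bigr]\bigr)^2}{I_t\bigl(\theta;\,(A_t, Y_t)\bigr)},
\qquad
\mathbb{E}\bigl[\operatorname{Reg}_T\bigr] \;\le\; \sqrt{\bar{\Gamma}\, H(\theta)\, T},
\]
where $\bar{\Gamma} \ge \Gamma_t$ for all $t$. By Cauchy-Schwarz, the cumulative regret is controlled by the total information gathered about $\theta$, which the chain rule of mutual information bounds by the prior entropy $H(\theta)$; measuring information about $\theta$ rather than about the optimal action is what makes the bound viable when contexts are chosen adversarially.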