We investigate finite stochastic partial monitoring, a general model for sequential learning with limited feedback. While Thompson sampling is one of the most promising algorithms for a variety of online decision-making problems, its properties for stochastic partial monitoring have not been theoretically investigated, and the existing algorithm relies on a heuristic approximation of the posterior distribution. To mitigate these problems, we present a novel Thompson-sampling-based algorithm that enables us to sample the target parameter exactly from the posterior distribution. Moreover, we prove that the new algorithm achieves a logarithmic problem-dependent expected pseudo-regret of $\mathrm{O}(\log T)$ for a linearized variant of the problem with local observability. This is the first regret bound for Thompson sampling in partial monitoring, and it also yields the first logarithmic regret bound for Thompson sampling in linear bandits.
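To make the idea of exact posterior sampling concrete, the following is a minimal, generic sketch of Thompson sampling with an exactly computable Gaussian posterior in a linear-feedback setting. It is not the paper's algorithm: the Gaussian prior, the known noise variance, the finite action set, and the linear observation model are all illustrative assumptions introduced here, and the partial-monitoring feedback structure is not modeled.

```python
# Generic Thompson sampling sketch with EXACT Gaussian posterior sampling
# (illustrative assumptions only; not the algorithm proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)

d, T = 3, 1000
sigma2 = 0.25                            # assumed known noise variance
actions = rng.standard_normal((8, d))    # hypothetical finite action set (feature vectors)

theta_star = rng.standard_normal(d)      # unknown parameter (for simulation only)
A = np.eye(d)                            # posterior precision, prior N(0, I)
b = np.zeros(d)                          # precision-weighted mean accumulator

for t in range(T):
    # Exact posterior is N(A^{-1} b, A^{-1}); sample theta_t and act greedily on it.
    cov = np.linalg.inv(A)
    theta_t = rng.multivariate_normal(cov @ b, cov)
    x = actions[np.argmax(actions @ theta_t)]

    # Observe noisy linear feedback and update the conjugate Gaussian posterior.
    y = x @ theta_star + rng.normal(scale=np.sqrt(sigma2))
    A += np.outer(x, x) / sigma2
    b += y * x / sigma2
```

In this conjugate Gaussian setting the posterior has a closed form, so no approximation is needed; the paper's contribution is to obtain such exact posterior sampling for the partial-monitoring model rather than this simplified linear one.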