The Causal Bandit problem is a variant of the classic bandit problem in which an agent must identify the best action in a sequential decision-making process, where the reward distributions of the actions exhibit a non-trivial dependence structure governed by a causal model. Methods proposed for this problem in the literature thus far rely on exact prior knowledge of the full causal graph. We formulate new causal bandit algorithms that no longer necessarily rely on prior causal knowledge. Instead, they utilize an estimator based on separating sets, which can be found using simple conditional independence tests or causal discovery methods. We show that, given a true separating set, this estimator is unbiased for discrete i.i.d. data and has variance upper bounded by that of the sample mean. We develop algorithms based on Thompson Sampling and UCB for discrete and Gaussian models, respectively, and show improved performance on simulated data as well as on a bandit problem drawing from real-world protein-signaling data.
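To illustrate the idea behind a separating-set estimator for discrete data, the following is a minimal sketch, not the paper's exact estimator: it assumes a single discrete separating variable `Z` for an arm `X` and reward `Y`, and estimates the interventional mean via the adjustment formula `E[Y | do(X=x)] = Σ_z P(z) E[Y | X=x, Z=z]`. The function name and interface are hypothetical.

```python
import numpy as np

def separating_set_estimate(x, z, y, x_val):
    """Adjustment-style estimate of E[Y | do(X = x_val)] from discrete
    i.i.d. samples, given a (true) separating variable Z.

    Hypothetical illustration: computes sum_z P(Z=z) * mean(Y | X=x_val, Z=z),
    skipping strata with no samples for the requested arm.
    """
    x, z, y = np.asarray(x), np.asarray(z), np.asarray(y)
    total = 0.0
    for z_val in np.unique(z):
        p_z = np.mean(z == z_val)                 # empirical P(Z = z)
        mask = (x == x_val) & (z == z_val)
        if mask.any():
            total += p_z * y[mask].mean()         # P(z) * E[Y | x, z]
    return total
```

Because every sample contributes to the estimate of `P(Z=z)`, such an estimator can reuse data gathered while pulling other arms, which is the intuition behind its reduced variance relative to the per-arm sample mean.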