In this paper, we first study the problem of combinatorial pure exploration with full-bandit feedback (CPE-BL), where a learner is given a combinatorial action space $\mathcal{X} \subseteq \{0,1\}^d$, and in each round the learner pulls an action $x \in \mathcal{X}$ and receives a random reward with expectation $x^{\top} \theta$, where $\theta \in \mathbb{R}^d$ is a latent and unknown environment vector. The objective is to identify the optimal action with the highest expected reward, using as few samples as possible. For CPE-BL, we design the first {\em polynomial-time adaptive} algorithm, whose sample complexity matches the lower bound (within a logarithmic factor) for a family of instances and has a light dependence on $\Delta_{\min}$ (the smallest gap between the optimal action and the sub-optimal actions). Furthermore, we propose a novel generalization of CPE-BL with flexible feedback structures, called combinatorial pure exploration with partial linear feedback (CPE-PL), which encompasses several families of sub-problems including full-bandit feedback, semi-bandit feedback, partial feedback, and nonlinear reward functions. In CPE-PL, each pull of an action $x$ reports a random feedback vector with expectation $M_{x} \theta$, where $M_x \in \mathbb{R}^{m_x \times d}$ is a transformation matrix for $x$, and gains a random (possibly nonlinear) reward related to $x$. For CPE-PL, we develop the first {\em polynomial-time} algorithm, which simultaneously addresses limited feedback, general reward functions, and combinatorial action spaces, and we provide its sample complexity analysis. Our empirical evaluation demonstrates that our algorithms run orders of magnitude faster than existing ones, that our CPE-BL algorithm is robust across different $\Delta_{\min}$ settings, and that our CPE-PL algorithm is the only one returning correct answers for nonlinear reward functions.
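To make the relation between CPE-PL and its classical sub-problems concrete, the following display gives one natural choice of transformation matrices recovering the two standard feedback structures (a minimal illustration; the exact row layout of the semi-bandit encoding may vary):
\[
    M_x^{\mathrm{full}} = x^{\top} \in \mathbb{R}^{1 \times d},
    \qquad
    M_x^{\mathrm{semi}} = \operatorname{diag}(x) \in \mathbb{R}^{d \times d},
\]
so that under full-bandit feedback a pull of $x$ observes only a noisy sample of the aggregate value $x^{\top} \theta$, whereas under semi-bandit feedback it observes a noisy sample of $\theta_i$ in each coordinate $i$ with $x_i = 1$.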