We investigate the sample complexity of learning the optimal arm in multi-task bandit problems. Arms consist of two components: one that is shared across tasks (which we call the representation) and one that is task-specific (which we call the predictor). The objective is to learn the optimal (representation, predictor) pair for each task, under the assumption that the optimal representation is common to all tasks. Within this framework, efficient learning algorithms should transfer knowledge across tasks. We consider the best-arm identification problem in the fixed-confidence setting, where, in each round, the learner actively selects both a task and an arm, and observes the corresponding reward. We derive instance-specific sample complexity lower bounds satisfied by any $(\delta_G,\delta_H)$-PAC algorithm (such an algorithm identifies the best representation with probability at least $1-\delta_G$, and the best predictor for each task with probability at least $1-\delta_H$). We devise an algorithm, OSRL-SC, whose sample complexity approaches the lower bound and scales at most as $H(G\log(1/\delta_G)+ X\log(1/\delta_H))$, where $X$, $G$, and $H$ denote, respectively, the number of tasks, representations, and predictors. This scaling is significantly better than that of classical best-arm identification algorithms, which scale as $HGX\log(1/\delta)$.
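To give a rough sense of the gap between the two scalings, consider a purely illustrative instance (the numbers below are hypothetical, not taken from the paper) with $X=100$ tasks, $G=H=10$, and $\delta_G=\delta_H=\delta$:
$$
H\big(G\log(1/\delta_G)+X\log(1/\delta_H)\big) = 10\cdot(10+100)\log(1/\delta) = 1100\,\log(1/\delta),
$$
whereas the classical scaling gives $HGX\log(1/\delta) = 10{,}000\,\log(1/\delta)$, roughly an order of magnitude more samples; the improvement factor $GX/(G+X)$ grows as the number of tasks $X$ increases.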