We study the problem of regret minimization in a multi-armed bandit setup where the agent is allowed to play multiple arms at each round by spreading the resources usually allocated to only one arm. At each iteration the agent selects a normalized power profile and receives a Gaussian vector as outcome, where the unknown variance of each sample is inversely proportional to the power allocated to that arm. The reward corresponds to a linear combination of the power profile and the outcomes, resembling a linear bandit. By spreading the power, the agent can collect information much faster than in a traditional multi-armed bandit, at the price of reduced sample accuracy. This setup is fundamentally different from that of a linear bandit: the regret is known to scale as $\Theta(\sqrt{T})$ for linear bandits, whereas in this setup the agent receives much richer feedback, for which we derive a tight $\log(T)$ problem-dependent lower bound. We propose a Thompson-Sampling-based strategy, called Weighted Thompson Sampling (\WTS), that sets the power profile to its posterior probability of each arm being the best arm, and show that its regret upper bound matches the derived logarithmic lower bound. Finally, we apply this strategy to a problem of control and system identification, where the goal is to estimate the maximum gain (also called the $\mathcal{H}_\infty$-norm) of a linear dynamical system from batches of input-output samples.
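The feedback model described in the abstract can be illustrated with a minimal simulation sketch: the agent picks a normalized power profile $p$, each arm $k$ returns a Gaussian sample whose variance scales as $1/p_k$, and the reward is the linear combination $p^\top x$. All names (`play_round`, `base_var`, the chosen `mu` and `p`) are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def play_round(p, mu, rng, base_var=1.0):
    """One round of the resource-spreading bandit (illustrative sketch).

    p        : normalized power profile (sums to 1, all entries > 0)
    mu       : unknown mean of each arm
    base_var : assumed base noise level; variance of arm k is base_var / p[k],
               so more power yields a more accurate sample
    Returns the Gaussian outcome vector x and the reward p . x.
    """
    p = np.asarray(p, dtype=float)
    mu = np.asarray(mu, dtype=float)
    assert np.isclose(p.sum(), 1.0) and (p > 0).all()
    x = rng.normal(mu, np.sqrt(base_var / p))  # lower power -> higher variance
    reward = p @ x                             # linear-bandit-style reward
    return x, reward

rng = np.random.default_rng(0)
mu = np.array([0.5, 1.0, 0.2])
p = np.array([0.2, 0.6, 0.2])  # spread power, favoring arm 1
x, r = play_round(p, mu, rng)
```

Spreading power over all arms gives one (noisy) sample per arm every round, which is what lets the agent learn fast enough to reach logarithmic regret.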