Consider a decision-maker that can pick one out of $K$ actions to control an unknown system, for $T$ turns. The actions are interpreted as different configurations or policies. Holding the same action fixed, the system asymptotically converges to a unique equilibrium that is a function of this action. The dynamics of the system are unknown to the decision-maker, who can only observe a noisy reward at the end of every turn. The decision-maker wants to maximize its accumulated reward over the $T$ turns. Learning which equilibria are better leads to higher rewards, but waiting for the system to converge to equilibrium costs valuable time. Existing bandit algorithms, whether stochastic or adversarial, achieve linear (trivial) regret for this problem. We present a novel algorithm, termed Upper Equilibrium Concentration Bound (UECB), that knows when to switch an action quickly if it is not worth waiting until the equilibrium is reached. This is enabled by employing convergence bounds to determine how far the system is from equilibrium. We prove that UECB achieves a regret of $\mathcal{O}(\log(T)+\tau_c\log(\tau_c)+\tau_c\log\log(T))$ for this equilibrium bandit problem, where $\tau_c$ is the worst-case approximate convergence time to equilibrium. We then show that both epidemic control and game control are special cases of equilibrium bandits, where $\tau_c\log \tau_c$ typically dominates the regret. Finally, we test UECB numerically for both of these applications.