We propose Banker-OMD, a novel framework generalizing the classical Online Mirror Descent (OMD) technique in online learning algorithm design. Banker-OMD allows algorithms to robustly handle delayed feedback and offers a general methodology for achieving $\tilde{O}(\sqrt{T} + \sqrt{D})$-style regret bounds in various delayed-feedback online learning tasks, where $T$ is the time horizon length and $D$ is the total feedback delay. We demonstrate the power of Banker-OMD with applications to three important bandit scenarios with delayed feedback: delayed adversarial multi-armed bandits (MAB), delayed adversarial linear bandits, and a novel delayed best-of-both-worlds MAB setting. Banker-OMD achieves nearly-optimal performance in all three settings. In particular, it leads to the first delayed adversarial linear bandit algorithm achieving $\tilde{O}(\text{poly}(n)(\sqrt{T} + \sqrt{D}))$ regret.
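To make the bound concrete, here is the standard accounting of the total delay in delayed-feedback online learning; the per-round delay notation $d_t$ is an assumption introduced for illustration, not defined in the abstract. If the feedback for the action chosen at round $t$ arrives $d_t$ rounds later, then
\[
D = \sum_{t=1}^{T} d_t ,
\]
so the $\tilde{O}(\sqrt{T} + \sqrt{D})$ bound recovers the standard non-delayed $\tilde{O}(\sqrt{T})$ rate when all $d_t = 0$ and degrades gracefully as the total delay grows.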