In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021), who obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde O(d^{1.5}\sqrt{\sum_{k=1}^K \sigma_k^2} + d^2)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde O$ hides polylogarithmic factors; this is a factor of $d^3$ improvement over the bound of Zhang et al. (2021). For linear mixture MDPs, we achieve a horizon-free regret bound of $\tilde O(d^{1.5}\sqrt{K} + d^3)$, where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^3$ improvement in the leading term and $d^6$ in the lower order term. Our analysis critically relies on a novel elliptical potential `count' lemma. This lemma allows a peeling-based regret analysis, which may be of independent interest.
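To illustrate the kind of statement the abstract refers to, the following is a minimal sketch of an elliptical potential count bound under standard assumptions: features satisfy $\|x_k\|_2 \le L$ and the regularized Gram matrix is $V_k = \lambda I + \sum_{i \le k} x_i x_i^\top$. The notation and constants here are ours for illustration and are not the paper's exact lemma. By the matrix determinant lemma,
\[
\det V_k = \det V_{k-1}\,\bigl(1 + \|x_k\|_{V_{k-1}^{-1}}^2\bigr),
\]
so every round with $\|x_k\|_{V_{k-1}^{-1}}^2 \ge 1$ at least doubles the determinant. If $N$ denotes the number of such rounds, then $2^N \lambda^d \le \det V_K \le \bigl(\lambda + K L^2/d\bigr)^d$, and hence
\[
N \;\le\; d \log_2\!\Bigl(1 + \frac{K L^2}{d \lambda}\Bigr).
\]
A count bound of this form, controlling how many rounds can have a large elliptical potential rather than bounding their sum, is the type of statement that, per the abstract, enables a peeling-based regret analysis.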