In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021), who obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde O(d^{1.5}\sqrt{\sum_{k=1}^K \sigma_k^2} + d^2)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde O$ ignores polylogarithmic dependence, which is a factor of $d^3$ improvement. For linear mixture MDPs, under the assumption that the maximum cumulative reward in an episode is in $[0,1]$, we achieve a horizon-free regret bound of $\tilde O(d \sqrt{K} + d^2)$, where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower order term. Our analysis critically relies on a novel elliptical potential `count' lemma. This lemma allows a novel regret analysis in conjunction with the peeling trick, which is of independent interest.
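For concreteness, the stated improvement factors can be read as the following side-by-side comparison. This is only a sketch derived from the quoted factors above (multiplying the new leading and lower-order terms by $d^3$, and by $d^{3.5}$ and $d^7$ respectively); it is not a restatement of Zhang et al. (2021)'s exact bounds, constants, or logarithmic factors.

% Sketch of the implied comparison, derived solely from the improvement
% factors quoted in the abstract (not the prior work's exact statements).
\begin{align*}
\text{Linear bandits:}\quad
  & \tilde O\!\Big(d^{4.5}\sqrt{\textstyle\sum_{k=1}^K \sigma_k^2}\Big)
    \;\longrightarrow\;
    \tilde O\!\Big(d^{1.5}\sqrt{\textstyle\sum_{k=1}^K \sigma_k^2} + d^2\Big),\\
\text{Linear mixture MDPs:}\quad
  & \tilde O\!\big(d^{4.5}\sqrt{K} + d^{9}\big)
    \;\longrightarrow\;
    \tilde O\!\big(d\sqrt{K} + d^{2}\big).
\end{align*}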