Distributional reinforcement learning (DRL), which is concerned with the full distribution of returns rather than only their mean, has achieved empirical success in a variety of domains. One of the core tasks in DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ of a given policy $\pi$. The distributional temporal difference (TD) algorithm has been proposed for this task, extending the temporal difference algorithm from the classic RL literature. In the tabular setting, \citet{rowland2018analysis} and \citet{rowland2023analysis} proved the asymptotic convergence of two instances of distributional TD, namely the categorical temporal difference (CTD) algorithm and the quantile temporal difference (QTD) algorithm, respectively. In this paper, we go a step further and analyze the finite-sample performance of distributional TD. To facilitate the theoretical analysis, we propose a non-parametric distributional TD (NTD) algorithm. For a $\gamma$-discounted infinite-horizon tabular Markov decision process with state space $S$ and action space $A$, we show that NTD needs $\widetilde{O}\left(\frac{1}{\varepsilon^{2p}(1-\gamma)^{2p+2}}\right)$ iterations to achieve an $\varepsilon$-optimal estimator with high probability, when the estimation error is measured by the $p$-Wasserstein distance. Under some mild assumptions, $\widetilde{O}\left(\frac{1}{\varepsilon^{2}(1-\gamma)^{4}}\right)$ iterations suffice to ensure that the Kolmogorov--Smirnov distance between the NTD estimator $\hat\eta^\pi$ and $\eta^\pi$ is less than $\varepsilon$ with high probability. We also revisit CTD, showing that the same non-asymptotic convergence bounds hold for CTD under the $p$-Wasserstein distance.
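To make the object of study concrete, the following is a minimal sketch of a tabular CTD-style update for distributional policy evaluation, not the paper's NTD algorithm or implementation. It assumes a fixed, evenly spaced support `atoms` and a table `eta` of per-state categorical probabilities; the function names and the transition interface are illustrative.

```python
import numpy as np

def categorical_projection(atoms, target_atoms, target_probs):
    """Project a categorical distribution supported on target_atoms onto the
    fixed support `atoms` (the standard categorical projection)."""
    v_min, v_max = atoms[0], atoms[-1]
    delta = atoms[1] - atoms[0]
    projected = np.zeros_like(atoms)
    for z, p in zip(target_atoms, target_probs):
        z = np.clip(z, v_min, v_max)
        b = (z - v_min) / delta            # fractional index of z on the support
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                        # z lands exactly on an atom
            projected[lo] += p
        else:                               # split mass between the two neighbours
            projected[lo] += p * (hi - b)
            projected[hi] += p * (b - lo)
    return projected

def ctd_update(eta, s, r, s_next, atoms, gamma, step_size):
    """One tabular CTD update at state s from an observed transition (s, r, s_next).

    eta[s] holds the categorical probabilities of the return-distribution
    estimate at state s over the fixed atoms."""
    target_atoms = r + gamma * atoms        # distributional Bellman shift and scale
    target = categorical_projection(atoms, target_atoms, eta[s_next])
    eta[s] = (1 - step_size) * eta[s] + step_size * target
    return eta
```

Iterating this update over sampled transitions under $\pi$ yields the CTD estimator whose non-asymptotic convergence in the $p$-Wasserstein distance is analyzed in the paper.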